Cognitive semantics

Cognitive semantics is part of the cognitive linguistics movement. Semantics is the study of linguistic meaning. Cognitive semantics holds that language is part of a more general human cognitive ability, and can therefore only describe the world as people conceive of it.[1] The implication is that different linguistic communities conceive of simple things and processes in the world differently (a matter of differing cultures), not that there is necessarily some difference between a person's conceptual world and the real world (wrong beliefs).

The main tenets of cognitive semantics are:

  • That grammar manifests a conception of the world held in a culture;
  • That knowledge of language is acquired and contextual;
  • That the ability to use language draws upon general cognitive resources and not a special language module.[1]

Cognitive semantics has introduced innovations like prototype theory, conceptual metaphors, and frame semantics, and it is the linguistic paradigm/framework that since the 1980s has generated the most studies in lexical semantics.[2] As part of the field of cognitive linguistics, the cognitive semantics approach rejects the traditional separation of linguistics into phonology, morphology, syntax, pragmatics, etc. Instead, it divides semantics into meaning-construction and knowledge representation. Therefore, cognitive semantics studies much of the area traditionally devoted to pragmatics as well as semantics.

The techniques native to cognitive semantics are typically used in lexical studies such as those put forth by Leonard Talmy, George Lakoff and Dirk Geeraerts. Some cognitive semantic frameworks, such as that developed by Talmy, take into account syntactic structures as well.

Points of contrast

As a field, semantics is interested in three big questions: what does it mean for units of language, called lexemes, to have "meaning"? What does it mean for sentences to have meaning? Finally, how is it that meaningful units fit together to compose complete sentences? These are the main points of inquiry behind studies into lexical semantics, structural semantics, and theories of compositionality (respectively). In each category, traditional theories seem to be at odds with those accounts provided by cognitive semanticists.

Classic theories in semantics (in the tradition of Alfred Tarski and Donald Davidson) have tended to explain the meaning of parts in terms of necessary and sufficient conditions, sentences in terms of truth-conditions, and composition in terms of propositional functions. Each of these positions is tightly related to the others. According to these traditional theories, the meaning of a particular sentence may be understood as the conditions under which the proposition conveyed by the sentence holds true. For instance, the expression "snow is white" is true if and only if snow is, in fact, white. Lexical units can be understood as holding meaning either by virtue of the set of things they may apply to (called the "extension" of the word), or in terms of the common properties that hold among these things (called its "intension"). The intension provides an interlocutor with the necessary and sufficient conditions that let a thing qualify as a member of some lexical unit's extension. Roughly, propositional functions are those abstract instructions that guide the interpreter in taking the free variables in an open sentence and filling them in, resulting in a correct understanding of the sentence as a whole.
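These traditional notions lend themselves to a simple formal picture. The following is an illustrative sketch only (in Python, with an invented toy universe and predicate; none of it comes from the theories cited): the intension of "white" is modelled as a membership test, its extension as the set of things passing that test, and a propositional function as an open sentence awaiting a value for its free variable.

  # Toy model of extension, intension, and a propositional function.
  # The universe and the predicate are invented for illustration.

  universe = ["snow", "coal", "milk", "grass"]

  def is_white(x):
      # Intension of "white": a necessary-and-sufficient membership test.
      return x in ("snow", "milk")

  # Extension of "white": the set of things in the universe the test picks out.
  extension_of_white = {x for x in universe if is_white(x)}

  def x_is_white(x):
      # Propositional function: the open sentence "x is white", awaiting a value for x.
      return is_white(x)

  # Truth conditions of the closed sentence "snow is white":
  print(x_is_white("snow"))   # True if and only if snow is (in this toy model) white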

Meanwhile, cognitive semantic theories are typically built on the argument that lexical meaning is conceptual. That is, meaning is not necessarily reference to the entity or relation in some real or possible world. Instead, meaning corresponds with a concept held in the mind based on personal understanding. As a result, semantic facts like "All bachelors are unmarried males" are not treated as special facts about our language practices; rather, these facts are not distinct from encyclopaedic knowledge. In treating linguistic knowledge as being of a piece with everyday knowledge, the question is raised: how can cognitive semantics explain paradigmatically semantic phenomena, like category structure? Set to the challenge, researchers have drawn upon theories from related fields, like cognitive psychology and cognitive anthropology. One proposal is to explain category structure in terms of nodes in a knowledge network. One example of a theory from cognitive science that has made its way into the cognitive semantic mainstream is the theory of prototypes, which cognitive semanticists generally argue is the cause of polysemy.[citation needed]

Cognitive semanticists argue that truth-conditional semantics is unduly limited in its account of full sentence meaning. While they are not on the whole hostile to truth-conditional semantics, they point out that it has limited explanatory power. That is to say, it is limited to indicative sentences, and does not seem to offer any straightforward or intuitive way of treating (say) commands or expressions. By contrast, cognitive semantics seeks to capture the full range of grammatical moods by also making use of the notions of framing and mental spaces.

Another trait of cognitive semantics is the recognition that meaning is not fixed but a matter of construal and conventionalization. The processes of linguistic construal, it is argued, are the same psychological processes involved in the processing of encyclopaedic knowledge and in perception. This view has implications for the problem of compositionality. An account in cognitive semantics called the dynamic construal theory makes the claim that words themselves are without meaning: they have, at best, "default construals," which are really just ways of using words. Along these lines, cognitive semantics argues that compositionality can only be intelligible if pragmatic elements like context and intention are taken into consideration.[1]

The structure of concepts

Cognitive semantics has sought to challenge traditional theories in two ways: first, by providing an account of the meaning of sentences by going beyond truth-conditional accounts; and second, by attempting to go beyond accounts of word meaning that appeal to necessary and sufficient conditions. It accomplishes both by examining the structure of concepts.

Frame semantics

Frame semantics, developed by Charles J. Fillmore, attempts to explain the meaning of linguistic units in terms of their relation to general understanding, not just in the terms laid out by truth-conditional semantics. Fillmore explains meaning in general (including the meaning of lexemes) in terms of "frames". By "frame" is meant any concept that can only be understood if a larger system of concepts is also understood.

Fillmore: framing

Many pieces of linguistic evidence motivate the frame-semantic project. First, it has been noted that word meaning is an extension of our bodily and cultural experiences. For example, the notion of restaurant is associated with a series of concepts, like food, service, waiters, tables, and eating.[1] These rich-but-contingent associations cannot be captured by an analysis in terms of necessary and sufficient conditions, yet they still seem to be intimately related to our understanding of "restaurant".

Second, and more seriously, these conditions are not enough to account for asymmetries in the ways that words are used. According to a semantic feature analysis, there is nothing more to the meanings of "boy" and "girl" than:

  1. BOY [+MALE], [+YOUNG]
  2. GIRL [+FEMALE], [+YOUNG]

And there is surely some truth to this proposal. Indeed, cognitive semanticists hold that the instances of the concept associated with a given word may be said to stand in a schematic relation to the concept itself. And this is regarded as a legitimate approach to semantic analysis, so far as it goes.

However, linguists have found that language users regularly apply the terms "boy" and "girl" in ways that go beyond mere semantic features. That is, for instance, people tend to be more likely to consider a young female a "girl" (as opposed to "woman") than they are to consider a borderline-young male a "boy" (as opposed to "man").[1] This fact suggests that there is a latent frame, made up of cultural attitudes, expectations, and background assumptions, which is part of word meaning. These background assumptions go above and beyond those necessary and sufficient conditions that correspond to a semantic feature account. Frame semantics, then, seeks to account for these puzzling features of lexical items in some systematic way.
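As a rough sketch of the point (Python; the feature labels are those from the list above, while the toy entities are invented), a pure feature analysis treats "boy" and "girl" as perfectly symmetric, which is exactly why the usage asymmetry just described has no place in it:

  # Semantic feature analysis as feature sets; the asymmetry in use is not representable here.

  features = {
      "boy":  {"MALE", "YOUNG"},
      "girl": {"FEMALE", "YOUNG"},
  }

  def applies(word, entity_features):
      # A word applies to an entity iff the entity has every feature the word requires.
      return features[word] <= entity_features

  young_female = {"FEMALE", "YOUNG"}
  borderline_young_male = {"MALE", "YOUNG"}   # "YOUNG" is all-or-nothing in this analysis

  # The two cases come out exactly symmetric ...
  print(applies("girl", young_female), applies("boy", borderline_young_male))   # True True
  # ... so the observed tendency to apply "girl" more readily than "boy" at the borderline
  # of youth cannot be captured by the features alone; on the frame-semantic view it
  # belongs to the latent frame of cultural attitudes and expectations.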

Third, cognitive semanticists argue that truth-conditional semantics is incapable of dealing adequately with some aspects of the meanings at the level of the sentence. Take the following:

  1. You didn't spare me a day at the seaside; you deprived me of one.

In this case, the second clause does not deny the truth-conditions of the claim expressed in the first clause; both clauses describe the same state of affairs. Instead, what is being denied is the way that the first clause frames that state of affairs.[1]

Finally, with the frame-semantic paradigm's analytical tools, the linguist is able to explain a wider range of semantic phenomena than they would be able to with only necessary and sufficient conditions. Some words have the same definitions or intensions, and the same extensions, but have subtly different domains. For example, the lexemes land and ground are synonyms, yet they naturally contrast with different things—sea and air, respectively.[1]

As we have seen, the frame semantic account is by no means limited to the study of lexemes—with it, researchers may examine expressions at more complex levels, including the level of the sentence (or, more precisely, the utterance). The notion of framing is regarded as being of the same cast as the pragmatic notion of background assumptions. Philosopher of language John Searle explains the latter by asking readers to consider sentences like "The cat is on the mat". For such a sentence to make any sense, the interpreter makes a series of assumptions: i.e., that there is gravity, the cat is parallel to the mat, and the two touch. For the sentence to be intelligible, the speaker supposes that the interpreter has an idealized or default frame in mind.

Langacker: profile and base

An alternate strain of Fillmore's analysis can be found in the work of Ronald Langacker, who makes a distinction between the notions of profile and base. The profile is the concept symbolized by the word itself, while the base is the encyclopedic knowledge that the concept presupposes. For example, let the definition of "radius" be "a line segment that joins the center of a circle with any point on its circumference". If all we know of the concept radius is its profile, then we simply know that it is a line segment that is attached to something called the "circumference" in some greater whole called the "circle". That is to say, our understanding is fragmentary until the base concept of circle is firmly grasped.

When a single base supports a number of different profiles, then it can be called a "domain". For instance, the concept profiles of arc, center, and circumference are all in the domain of circle, because each uses the concept of circle as a base. We are then in a position to characterize the notion of a frame as being either the base of the concept profile, or (more generally) the domain that the profile is a part of.[1]
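A small illustrative sketch (Python; the circle-related concept names come from the example above, and the data structure is only an expository device, not Langacker's own formalism) of how profiles, bases, and domains relate:

  # Each concept profile is linked to the base it presupposes (expository device only).

  base_of = {
      "radius": "circle",
      "arc": "circle",
      "center": "circle",
      "circumference": "circle",
      "hypotenuse": "right triangle",   # added only to show a second, unrelated base
  }

  def domain(base):
      # All the profiles supported by a single base together form that base's domain.
      return {profile for profile, b in base_of.items() if b == base}

  print(domain("circle"))   # {'radius', 'arc', 'center', 'circumference'}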

Categorization and cognition

(Figure: Membership of a graded class)

A major divide in the approaches to cognitive semantics lies in the puzzle surrounding the nature of category structure. As mentioned in the previous section, semantic feature analyses fall short of accounting for the frames that categories may have. An alternative proposal would have to go beyond the minimalistic models given by classical accounts, and explain the richness of detail in meaning that language speakers attribute to categories.

Prototype theories, investigated by Eleanor Rosch, have given some reason to suppose that many natural lexical category structures are graded, i.e., they have prototypical members that are considered to "better fit" the category than other examples. For instance, robins are generally viewed as better examples of the category "bird" than, say, penguins. If this view of category structure is correct, then categories can be understood to have central and peripheral members, and not just be evaluated in terms of members and non-members.
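One crude way to picture graded membership (an illustrative sketch in Python; the feature lists are invented and are not Rosch's method) is to treat typicality as overlap with a prototype's features, so that a robin scores higher than a penguin without the penguin falling outside the category:

  # Graded membership as overlap with a prototype; the feature lists are invented.

  bird_prototype = {"has feathers", "lays eggs", "flies", "sings", "small"}

  exemplars = {
      "robin":   {"has feathers", "lays eggs", "flies", "sings", "small"},
      "penguin": {"has feathers", "lays eggs", "swims", "large"},
  }

  def typicality(name):
      # Share of prototype features the exemplar exhibits: a stand-in for goodness of fit.
      return len(exemplars[name] & bird_prototype) / len(bird_prototype)

  for name in exemplars:
      print(name, typicality(name))   # robin 1.0, penguin 0.4: both birds, one more central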

In a related vein, George Lakoff, following the later Ludwig Wittgenstein, noted that some categories are only connected to one another by way of family resemblances. While some classical categories may exist, i.e., which are structured by necessary and sufficient conditions, there are at least two other kinds: generative and radial.

Generative categories can be formed by taking central cases and applying certain principles to designate category membership. The principle of similarity is one example of a rule that might generate a broader category from given prototypes.

Radial categories are categories whose instances may share only a few, or even just one, of the qualities associated with the category as a whole. The concept of "mother", for example, may be explained in terms of a variety of conditions that may or may not be sufficient. Those conditions may include: being married, having always been female, having given birth to the child, having supplied half the child's genes, being a caregiver, being married to the genetic father, being one generation older than the child, and being the legal guardian.[3] Any one of the above conditions might not be met: for instance, a "single mother" does not need to be married, and a "surrogate mother" does not necessarily provide nurturance. When these aspects collectively cluster together, they form a prototypical case of what it means to be a mother, but nevertheless they fail to outline the category crisply. Variations upon the central meaning are established by convention by the community of language users, and the resulting set of instances, many connected to the center by a single shared trait, are reminiscent of a wheel with a hub and spokes.
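Continuing in the same illustrative vein (Python; the condition labels paraphrase the list above, and the counts are only an expository device), a radial category can be pictured as a hub of clustered conditions, with each instance connected to the hub by whatever subset of conditions it happens to satisfy:

  # Radial category sketch: "mother" as a hub of conditions; instances meet only some of them.

  mother_conditions = {
      "married", "female", "gave birth", "supplied half the genes",
      "caregiver", "married to genetic father", "one generation older", "legal guardian",
  }

  instances = {
      "prototypical mother": set(mother_conditions),                           # the hub itself
      "single mother":       mother_conditions - {"married", "married to genetic father"},
      "surrogate mother":    {"female", "gave birth", "one generation older"},
  }

  for name, conditions in instances.items():
      shared = conditions & mother_conditions
      print(f"{name}: shares {len(shared)}/{len(mother_conditions)} conditions with the hub")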

For Lakoff, prototype effects can be explained in large part by the effects of idealized cognitive models. That is, domains are organized around an ideal notion of the world that may or may not fit reality. For example, the word "bachelor" is commonly defined as "unmarried adult male". However, this concept has been created with a particular ideal of what a bachelor is like: an adult, non-celibate, independent, socialized, and promiscuous. Reality might either strain the expectations of the concept, or create false positives. That is, people typically want to widen the meaning of "bachelor" to include exceptions like "a sexually active seventeen-year-old who lives alone and owns his own firm" (not technically an adult but seemingly still a bachelor), and this can be considered a kind of straining of the definition. Moreover, speakers would tend to want to exclude from the concept of bachelor certain false positives, such as those adult unmarried males that don't bear much resemblance to the ideal: i.e., the Pope, or Tarzan.[3] Prototype effects may also be explained as a function of basic-level categorization and typicality, closeness to an ideal, or stereotyping.

So viewed, prototype theory seems to give an account of category structure. However, there are a number of criticisms of this interpretation of the data. Indeed, Rosch and Lakoff, themselves chief advocates of prototype theory, have emphasized in their later works that the findings of prototype theory do not necessarily tell us anything about category structure. Some theorists in the cognitive semantics tradition have challenged both classical and prototype accounts of category structure by proposing the dynamic construal account, where category structure is always created "on-line"—and so, that categories have no structure outside of the context of use.

Mental spaces

(Figure: Propositional attitudes in Fodor's presentation of truth-conditional semantics)

In traditional semantics, the meaning of a sentence is the situation it represents, and the situation can be described in terms of the possible world that it would be true of. Moreover, sentence meanings may be dependent upon propositional attitudes: those features that are relative to someone's beliefs, desires, and mental states. The role of propositional attitudes in truth-conditional semantics is controversial.[4] However, by at least one line of argument, truth-conditional semantics seems to be able to capture the meaning of belief-sentences like "Frank believes that the Red Sox will win the next game" by appealing to propositional attitudes. The meaning of the overall proposition is described as a set of abstract conditions, wherein Frank holds a certain propositional attitude, and the attitude is itself a relationship between Frank and a particular proposition; and this proposition is the possible world where the Red Sox win the next game.[5]
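A toy rendering of that analysis (Python; the worlds and their facts are invented, and the subset treatment of belief is one standard possible-worlds construal rather than anything claimed by the article's sources): a proposition is modelled as the set of possible worlds in which it holds, and the attitude relates Frank's belief state to that set.

  # Toy possible-worlds model; the worlds and facts are invented for illustration.

  worlds = {
      "w1": {"the Red Sox win the next game"},
      "w2": {"the Red Sox lose the next game"},
  }

  # A proposition is modelled as the set of worlds in which it holds.
  red_sox_win = {w for w, facts in worlds.items() if "the Red Sox win the next game" in facts}

  # Frank's belief state: the worlds compatible with everything he believes.
  franks_belief_worlds = {"w1"}

  def believes(belief_worlds, proposition):
      # The attitude holds iff every world compatible with the beliefs lies in the proposition.
      return belief_worlds <= proposition

  print(believes(franks_belief_worlds, red_sox_win))   # True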

Still, many theorists have grown dissatisfied with the inelegance and dubious ontology behind possible-worlds semantics. An alternative can be found in the work of Gilles Fauconnier. For Fauconnier, the meaning of a sentence can be derived from "mental spaces". Mental spaces are cognitive structures entirely in the minds of interlocutors. In his account, there are two kinds of mental space. The base space is used to describe reality (as it is understood by both interlocutors). Space builders (or built space) are those mental spaces that go beyond reality by addressing possible worlds, along with temporal expressions, fictional constructs, games, and so on.[1] Additionally, Fauconnier semantics distinguishes between roles and values. A semantic role is understood to be description of a category, while values are the instances that make up the category. (In this sense, the role-value distinction is a special case of the type-token distinction.)

Fauconnier argues that curious semantic constructions can be explained handily by the above apparatus. Take the following sentence:

  1. In 1929, the lady with white hair was blonde.

The semanticist must construct an explanation for the obvious fact that the above sentence is not contradictory. Fauconnier constructs his analysis by observing that there are two mental spaces (the present-space and the 1929-space). His access principle supposes that "a value in one space can be described by the role its counterpart in another space has, even if that role is invalid for the value in the first space".[1] So, to use the example above, the value in 1929-space is the blonde, while she is being described with the role of the lady with white hair in present-day space.
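A rough sketch of the two spaces and the access principle at work in this example (Python; the identifiers and attribute names are invented for exposition and are not Fauconnier's notation): the role "the lady with white hair" picks out an individual in present-day space, and the access principle then lets the counterpart of that individual in 1929-space be described, where the role no longer holds.

  # Two mental spaces for the 1929 example; structure and attribute names are illustrative.

  present_space = {"lady_1": {"hair": "white"}}
  space_1929    = {"lady_1": {"hair": "blonde"}}   # counterpart of the same individual

  def individual_with_hair(colour, space):
      # Pick out the individual that a hair-colour description identifies within one space.
      for individual, attributes in space.items():
          if attributes["hair"] == colour:
              return individual
      return None

  # Access principle: identify the individual via her role in present-day space ...
  who = individual_with_hair("white", present_space)
  # ... and read off the value of her counterpart in 1929-space, where that role is invalid.
  print(space_1929[who]["hair"])   # 'blonde', with no contradiction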

Conceptualization and construal

As we have seen, cognitive semantics gives a treatment of issues in the construction of meaning both at the level of the sentence and the level of the lexeme in terms of the structure of concepts. However, it is not entirely clear what cognitive processes are at work in these accounts. Moreover, it is not clear how we might go about explaining the ways that concepts are actively employed in conversation. It appears to be the case that, if our project is to look at how linguistic strings convey different semantic content, we must first catalogue what cognitive processes are being used to do it. Researchers can satisfy both requirements by attending to the construal operations involved in language processing—that is to say, by investigating the ways that people structure their experiences through language.

Language is full of conventions that allow for subtle and nuanced conveyances of experience. To use an example that is readily at hand, framing is all-pervasive, and it may extend across the full breadth of linguistic data, extending from the most complex utterances, to tone, to word choice, to expressions derived from the composition of morphemes. Another example is image-schemata, which are ways that we structure and understand the elements of our experience driven by any given sense.

According to linguists William Croft and D. Alan Cruse, there are four broad cognitive abilities that play an active part in the construction of construals. They are: attention/salience, judgment/comparison, perspective/situatedness, and constitution/gestalt.[1] Each general category contains a number of subprocesses, each of which helps to explain the ways we encode experience into language in some unique way.

References

  1. Croft, William and D. Alan Cruse (2004). Cognitive Linguistics. Cambridge: Cambridge University Press. pp. 1, 7–15, 33–39, 105. ISBN 9780521667708.
  2. Geeraerts, Dirk (2010). Theories of Lexical Semantics, Introduction, p. xiv.
  3. Lakoff, George (1987). Women, Fire, and Dangerous Things. University of Chicago Press. pp. 70, 82–83. ISBN 9780226468037.
  4. Bunnin, Nicholas and E. P. Tsui-James (1999). The Blackwell Companion to Philosophy. Oxford: Blackwell. p. 109.
  5. Fodor, Jerry. "Propositional Attitudes".