Togog: Subproject P1

Gestural modes of representation in human and non-human primates

Researchers: Cornelia Müller, Ellen Fricke, Katja Liebal, Irene Mittelberg, Silva Ladewig, and Sedinha Teßendorf



We set out to demonstrate the following:

1. There are four basic modes of representation evidenced in gesture production: acting, modeling, drawing, and embodying (Müller 1998, Müller et al. in prep.)

2. The processes of cognitive semiosis for producing and understanding gestures entail:
- abstraction in general;
- image schemas (Mittelberg 2008);
- internal and external metonymy (Mittelberg & Waugh 2008);
- metaphor (Müller 2008);
- processes of type construction and semantic loading (Fricke 2008, in prep.).

3. Language is multimodal.

Müller’s account of the four basic modes of representation in human primates (acting, modeling, drawing, and embodying) underlying gesture creation is our starting point for a fully fledged account of the cognitive-semiotic processes driving the production and understanding of human gestures. Significant progress has been made towards an empirically based refinement of this model. Ladewig and Teßendorf have created a coding scheme and, with the support of our student assistant Benjamin Marienfeld, have coded gestures occurring in a broad range of discourse types, since gestures vary with the type of discourse. As a preliminary result of this study, we have recognized the need to establish subcategories that account for the merging or combining of different modes of representation. We are currently working on a new classification system of the modes of representation that includes categories describing what gestures embody, draw, model, or reenact.

We are now embarking on the empirical study of modes of representation in non-human primates, which addresses the core issue of the evolution of gestures as an early means of symbolic communication. Our neurological studies underline that the transition from action to gesture is by no means a trivial cognitive process: the pantomime of tool use requires additional left-hemispheric activation compared to the demonstration of tool use with a tool in the hand. Hence, pantomiming is not simply demonstrating without a tool; it requires associating a representation of a mental object with a movement concept. The transition from action to gesture thus requires abstraction and involves metonymy in the process of constructing a mental representation; in Müller's terms, it involves creative abstraction.

Mittelberg and Fricke provide the semiotic, functional, and cognitive theoretical frameworks for our investigation of the processes of cognitive semiosis underlying gesture formation. So far, our attention has focused on how conceptual metonymy is involved in the motivation of metaphoric gestures and of cross-modal modes of indirect reference and pragmatic inferencing (Mittelberg), and on the role of conceptual metaphor and metonymy in motivating the functional variations of a recurrent pragmatic gesture prominent in everyday Spanish conversation (Teßendorf). In addition, Müller has proposed that the gestural modes of representation imply conceptualizations of perceived (and conceived) objects and events in the world, and that their fundamental principle is conceptual metonymy. Metonymy is thus identified as a basic principle of gesture creation; it underlies gestures referring to concrete as well as abstract entities and events.

In addition to investigating the role of metonymy in processes of gestural meaning creation, we are turning our attention to the role that image schemas may play. Bressem explores how image schemas may motivate recurrent gestural forms, Ladewig works on how the image schema ‘cycle’ motivates the semantic core of a specific recurrent gesture (the cyclic gesture), and Mittelberg is working on geometric and image-schematic patterns in meta-grammatical gestures.

The issue of image and motor schemas and their role in gesture ‘making’ is highly pertinent to the interdisciplinary perspective of Togog. It relates directly to the results of the neurological studies on gestural modes of representation (see subproject P3 for more details), since it addresses the issue of the nature of the mental representation that has to be ‘added’ in gestures that demonstrate tool use or, in Hedda Lausberg’s terms, that perform a pantomime of tool use or, in terms of Müller’s modes of representation, are created on the basis of the acting mode. Our neurological findings could point to a difference in the type of mental representations underlying pantomimic tool use as compared to those involved in demonstrations of how to use a tool that is actually held in the hand. With regard to primate evolution, the particular image-schematic and motor-schematic patterns we do or do not find in non-human primates could offer traces of the evolution of mental representation insofar as they are related to symbol formation via gestural movement. Up to now, we have only identified one mode of gestural representation in non-human primates, namely the acting mode. The transition from action to gesture, which involves processes of ritualization and abstraction, thus remains a core issue.

This year, we have begun to integrate findings from sign language research, especially on classifiers. The theme session on “Gestures: A comparison of signed and spoken languages” (organized by Müller in collaboration with two sign language researchers, Ulrike Wrobel and Jens Heßmann) at the 30th Annual Convention of the German Society of Linguistics in Bamberg must be regarded as a historic event in two respects. (1) It broke a taboo in sign linguistics: because sign languages are now regarded as fully fledged languages, their gestural dimensions are free to be pursued as research topics within the sign language community. (2) The German Society of Linguistics’ very acceptance of this theme session endorsed the recognition of gesture studies as a field of linguistic enquiry. It was the first time that the official institution of German linguistics had accepted that languages are (at least, also) multimodal. With this achievement, we have made an important step towards one of our core project goals: to show that language is inherently multimodal.