Monday, May 9, 2011

Spontaneous Gestures - Part 3 of 3

(Continued from Parts 1 & 2)

McClave concluded her discussion of gestures by delineating their different dimensions: “In a nutshell, that is what gestures tell us, the manual gestures.” She then went on to explain, “We also looked at head movements, because it turns out that people move their heads in predictable ways in American culture.”

One example of a head movement conveying meaning occurred when people said anything inclusive, such as “all,” “whole,” or “completely”: there would be a lateral sweep of the head on the word. McClave hypothesized this happens because, given the earth’s gravity, many things are arranged horizontally.

McClave showed us video clips from a study conducted at CSUN with four of her graduate students. They set out to prove that the head sweep was worldwide, not just American. They analyzed head movements of Arabic, Korean, and Bulgarian speakers. In addition, to rule out the influence of Hollywood, they studied the people of Turkana, a group of nomadic pastoralists in northwest Kenya. McClave concluded, “The data is showing around the world; we can’t claim it’s universal because no linguist would ever try to do that, but we can claim we’re seeing it all over the world with no exposure to American culture, to television or anything else. We’re finding this sweep, lateral sweep. What does this tell us? We hypothesize that humans can conceptualize something that is absent or even abstract as being positioned in space. This has amazing implications for ASL because this is exactly what ASL does. It positions something abstract in space and then makes the sign for it, sets it up and then keeps pointing to that same space to represent that entity. Well it turns out, and by the way, everyone thought this was peculiar to sign languages. Sign language made this up. Turns out we’re all doing it. What probably happened, again the hypothesis is, signers are incredibly observant; they are watching you like no one else because they need every clue they can get for communication to be successful. So, it means that maybe what happened was signers saw all of us doing this, were able to pick up on the nonverbal and then use these nonverbal cues that hearing people have in order to grammaticize in ASL. Just a theory, but I’m sticking with it at this point.”

Another thing they discovered in their study of people around the world: speakers moved their heads or made gestures for each individual item on a list. McClave showed us more video clips of these examples.

McClave discussed intensification—lateral back-and-forth head movement—with us. Research by Chuck and Candy Goodwin and Manny Schegloff at the University of California, Los Angeles, found that lateral head shakes mark assessments, whether good or bad. Linguists had assumed the movement originated with negative comments. McClave postulates a different origin. She explained her hypothesis this way: “The lateral sweep is to show an array of objects. Remember? We said, well, if it sweeps you’re going this way. Well, if you intensify something, there is more of it, right? So, then if you’re going back and forth, there’s just more and more to see.”

McClave continued, “. . . this is more evidence that we conceptualize abstract or absent entities as positioned in space, how the mind is actually working. By the way, this fits beautifully with theories from Berkeley where George Lakoff said that we think metaphorically, basically all of our thinking is metaphoric. So, this fits beautifully with this. We are positioning things in space, again supporting these theories. We also find that this relates to ASL. We found speakers showing image sizes, the size of the entity, with their head movements. And by the way, ASL does this too. If you’re giving something to a tall person in ASL, you go up. If you’re giving something to a short person in ASL, you’re going down, to a child.” She then demonstrated giving something to a short person while looking up, explaining that doing so is ungrammatical in ASL.

The last topic discussed was back-channeling—giving cues to a speaker that we are listening. Back-channeling was first noticed by Victor Yngve at the University of Chicago in 1970. It is commonly assumed the listener back-channels whenever he or she feels like it. However, going over her data, McClave discovered that speakers nodded when there was nothing to nod at, so she started to look at the listener. She commented, “The minute the speaker went down like this, within one second the listener would back-channel.” They found the same pattern in other languages.

McClave closed with, “What did all of this tell us? It tells us in the gesture world, we need to rethink utterance. If you depend only on speech, you have no idea, really, whether you’ve gotten the thought. A lot of the information is coming across on the gestural channel and we’re finding out that people, even though they are not aware of it, are responding to it.”

Wednesday, May 4, 2011

Spontaneous Gestures - Part 2 of 3

(Continued from Part 1 of 3)

McClave reiterated that gestures are connected to the thought expressed. She microanalyzes at one-thirtieth of a second because video goes by at thirty frames per second. She and her team “. . . found the gesture will line up with the co-expressive part of the speech, but it always either precedes it or comes right down on that beat where that speech is co-expressive.”
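The one-thirtieth-of-a-second resolution is simply the duration of a single frame at thirty frames per second. As a minimal sketch of that arithmetic (the function name is my own illustration, not anything from the lecture):

```python
def frame_to_seconds(frame_index, fps=30):
    """Convert a zero-based video frame index to its timestamp in seconds.

    At 30 fps, each frame spans 1/30 of a second, which is the
    resolution used when aligning gestures with speech frame by frame.
    """
    return frame_index / fps

# Frame 45 of a 30 fps recording falls 1.5 seconds into the video.
print(frame_to_seconds(45))
```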

An early misconception about the relationship between gesture and speech was that gesture translates speech. McClave clarified, “The gesture researchers realized that we see features in gestures that never were in the speech. Therefore, it can’t be possible that it’s translating the speech because it was never there to begin with in the speech.”

McClave commented, “. . . the listener picks up the gesture and no longer knows how that information got to them, through the gesture or through the speech. It just becomes totally integrated in the mind. We’ve done enough experiments on this to find that people are paying attention to the gesture, incorporating it into their understanding of the interaction, and no longer able to tell you what came through the gesture medium and what came through the speech medium.”

McClave showed us a video clip and pointed out, “Now, before this, linguists were working with audio data. They would ask, ‘Guess? jeans? He’s talking about Guess? jeans.’” However, that is not what he was talking about, she continued: “He’s talking about the logo across his shirt. So, now, if you ask an interlocutor what he said, he’d say ‘yeah, they were wearing this logo across their shirts’ . . . .”

After showing us several video clips, McClave explained that linguists started to realize there were different dimensions of gestures—iconic, metaphoric, abstraction, deictic, and beats.

In a video clip, we saw a woman using an iconic gesture while “holding a child.” In another clip, we saw a man describing something abstract by using his own body to show location (self-marking).

We were shown a metaphorical gesture in another video clip. McClave noted, “So, if you know something about sign language this will be interesting. He’s going to talk about the shine in a little child’s eye. That’s not a physical thing, so he’s going to gesture for it.” She stopped the video and said, “So, he’s doing something very close to ASL and there’s more about the connection between ASL and gestures later.” In the fourth video clip, the speaker referred to “the cycle of life.” After we saw the speaker sweep her right hand to the right, McClave commented, “Here, linguists love this. Why? First off, we got one metaphor in speech. ‘Cycle of life,’ a metaphor. We’ve got a different metaphor on the hands. The body coming out with two simultaneous metaphors. . . .”

McClave pointed out, “The deictic dimension, linguists divide into concrete and abstract pointing. Concrete means you’re actually pointing at somebody that is right there or at something in the environment. But it turns out that, much like ASL signers, hearing people point at things that are not there at all.”

In another video clip, McClave noted that a man points to someone not there when he says, “So I’m gonna ask this one lady.” McClave interjected, “He’s going to go over to his left. In fact, the head moves first and the point goes to the left. This is amazing if you know ASL. ASL has grammaticized the same thing. Which leads us to believe, to postulate, maybe ASL is based on the observation of hearing people. But nobody knew hearing people were doing this when they started to do the linguistics of ASL.”

Beats were the last dimension of gesture described. McClave asserted, “It was originally assumed that beats emphasized particular syllables. That when you wanted to emphasize a syllable you came down on a beat.” However, she discovered that beats occur on unstressed syllables and during silence, manifested in rhythmic patterns. Therefore, by the time you come down on the tone unit nucleus, the thought is already in the brain—the whole utterance is built up around the word.

McClave mentioned Brian Butterworth, a researcher in England, who claimed gestures happen because we are looking for the right words. She noted, “Those gestures happen but they’re very rare. . . . So, based on the evidence I’ve shown you, David McNeill of the University of Chicago has put forth a theory of how we actually think. He calls it Growth Point Theory because we can predict that the gesture will match up with the word it is semantically related to. And then there’s a predictable timing relationship. He hypothesized that probably this word and this image were a unity in the brain to begin with. And that’s what’s within your mind, and then you go and build up your utterance around this thought, basically.”

Next we watched a video of a woman discussing a location in O’Hare Airport. McClave described how the woman’s utterance was built up around the image in her mind, using Growth Point Theory.

Sunday, May 1, 2011

Spontaneous Gestures - Part 1 of 3

On March 30, 2011, I attended a presentation hosted by the CSUN Linguistics Club, “I Know What You’re Thinking: What Spontaneous Gestures Tell Us About Language and Thought,” by Evelyn McClave, Ph.D.

Dr. McClave is currently a professor of Linguistics and Deaf Studies at California State University, Northridge (CSUN). She is fluent in American Sign Language and has done linguistic research as it pertains to Deaf Studies.

The lecture focused on McClave’s research on gestures, particularly, as she explained, on “. . . spontaneous gestures that hearing people make and don’t even think about what they’re doing. They are beyond the level of conscious recall.”

These gestures do not have any conventionalized forms. She ruled out emblems—the “thumbs up” gesture to represent approval or the index and thumb touch to form the “okay” symbol. McClave explained, “What I am looking at are these unconscious gestures and the basic assumptions we make are the same assumptions all scientists make and, number one, there are underlying patterns waiting to be discovered. Sure enough, we have found them. The second assumption that all of us make is that our body movements are not random. Believe it or not, we can actually predict what kind of body movements you’re going to make usually in certain circumstances.”

Early observations in the gesture field were made during the 1980s. Describing the first work by Adam Kendon, one of the top authorities on gestures, McClave said, “Speakers gesture, not listeners. So it tends to be the person talking who is moving his or her hands. Not the person/people who are listening.” She continued, “. . . gesture and speech are semantically coherent, co-expressive.” McClave demonstrated the gesture of holding a baseball bat but referred to a tennis match. She explained if this were done, the speaker would correct himself in the direction of the gesture, not the utterance—the gesture is always right.

McClave reiterated that gestures are connected to the thought expressed. She microanalyzes at one-thirtieth of a second because video goes by at thirty frames per second. She and her team “. . . found the gesture will line up with the co-expressive part of the speech, but it always either precedes it or comes right down on that beat where that speech is co-expressive.”