I feel like I should say something about
my title slide here being that I'm actually at San Diego State giving this talk. All of my talks start with
this slide, with Hepner Hall as the background, regardless
of the title and where I am and I do that in part because I'm really
proud to be a faculty member at San Diego State promoting research at San Diego
State and also because this is such a beautiful iconic image of California that it will entice students and
collaborators to San Diego. So what I wanted to do today is
share with you some examples of what the study of
sign languages can tell us about the nature of human language and
about the brain. And I'm gonna start by just giving you a flavor of sign
languages around the world so the first example up here is a signer using American Sign Language
to give a short lecture about the structure of the brain. This other example here is from the Netherlands; it's part of a very large corpus study being conducted by Onno Crasborn. It's an older woman telling a fairy tale in Sign Language of the Netherlands, and this project is looking at how language differs across generations and what kind of dialectal variation there is across the Netherlands, really using this corpus for linguistic and sociolinguistic research. The example down here with a little boy
he is actually from my colleagues Diane Lillo-Martin and Ronice Quadros. This is
from Libras, or Brazilian Sign Language. They're studying how sign languages are acquired in different countries and how different languages are acquired. His parents are deaf, and
he's telling them about his day at school. And then the final examples here are
basically from linguistics articles, so from linguists who are studying different languages around the world. In all linguistics papers you give examples from the language that you're studying, and these are taken from DVDs and CDs of linguistic examples, looking at how sign languages vary across the world and what is similar, basically sign language typology. And so what I'd like to do is raise some questions about what the study of sign languages in all their forms, and the study of the deaf and hearing people who use them, have to tell us. So one of the things the study of sign languages tells us is what's
really universal to all human languages. So you can't make
statements about what's common to all languages
without looking at sign languages. And I would argue that theories that can account for both signed and spoken languages are to be preferred over theories that
really only focus or account for spoken language data or
theories that only account for sign language data. Sign languages can tell us about what aspects of human language are
shaped by their perceptual systems by audition
versus vision. So for example we know that the auditory
system is very good at fast temporal changes, 40 milliseconds, and so spoken languages tend to have
a lot of linear structure: lots of segments, lots of morphemes, or meaning units, that can be combined in that linear structure. For sign languages, on the other hand,
vision is very good at taking information in simultaneously. So sign languages tend to have a lot
more simultaneous structure for example you can have information
conveyed linguistically on the face at the same time as you're producing
signs. Looking at sign languages also tells us how languages are shaped by their output systems. So sign languages are
produced by the hands moving in space, and you can
see the articulators. In comparison to speech where you can't
see the articulators. The tongue is inside the mouth. And it turns out that these have interesting implications for the
nature of signs. So there are signs that can look like actions, for example "brush hair". It's hard to make a
word look like an action or sound like an action. You can do it but it's much more reduced than in sign languages. These properties also have an effect on how we talk about spatial
relationships, something I'm quite interested in. You can place the hands in space, and that works very differently than for spoken languages. These differences result from the sort of input-output mechanisms of speech versus sign. What I wanna do now is raise three questions that come about from thinking about sign languages. I've selected these questions because they can really show how sign language can be a useful tool in understanding the brain basis for language and the nature of language. So the first question I wanna ask is just
do all human languages represent meaning that is semantics independently
from form, or phonology? Then the second question we'll be looking at is this relationship between language and pantomime: when language looks like pantomime, how does the brain tell the difference between the two? And then finally this question about how the biology of language expression might affect the neural substrates, the neural basis, for spatial language. Okay so let's start with the first
question. And the reason that this question comes up is because of something called iconicity, the
fact that signs often look like or have some
relationship to their meaning. I'll give you some examples. So these are from American Sign Language, so the first one the sign for hairbrush looks very much
like brushing your hair. The sign for ball looks like the
shape of a ball. The sign for Scotland reflects this
sort of typical plaid scarf that is associated with
Scotsmen. And then for the sign for the mind or the brain, you're pointing to the mind or the brain. Now, because form is very often not independent of meaning, does this lead to sort of fundamental differences in the way that meaning and form are represented, compared to spoken languages? Okay, so one question is: are semantics and form really kind of conflated, or the same, in sign languages because of this iconicity? And if that's the case then there's an
interesting prediction. Which is that signers should not
experience what's called or the equivalent of a tip of the tongue
experience. How many of you know what a tip-of-the-tongue state is, or have heard of that? Yeah, pretty much everybody. So it's the idea
that you know the word you want, you retrieve the semantics, but you can't get at the form of the word. Okay so we'll see if we
can kinda induce that experience in at least some of you, see if we can get you to experience a tip-of-the-tongue. What I'm gonna do is show you a picture with a definition and you have to come up with the word. Now if you know what it is, don't say it, keep it to yourselves, just in case your neighbor is in a T-O-T state. Because your neighbor can't quite get what the word is, and if you say it then they'll get it. So let's see if we can induce a T-O-T
experience. Know what that is? Anybody? We'll relieve the pressure, periscope! So often you'll know maybe the beginning of the word you feel like it's got more than
one syllable, but you won't be able to get the
actual form of the word. Even though you know exactly what it is.
It turns out that these T-O-T experiences are much more
common with proper nouns with names of people and places, so I'm gonna try one more time again
I'm gonna show you a picture of a famous person. If you know who it is, don't say it, and if you don't, see if you are in a T-O-T kind of state. Often you'll know things about the person: you'll know that she was nominated for an Academy Award, she didn't win this year but she won earlier for "As Good As It Gets", she was in a famous TV show. Helen Hunt is who this is. So for spoken languages, these data from tip-of-the-tongue experiences have been used to show that in spoken language production there's a separation, a two-stage process in retrieving a word: you first retrieve the semantics, the meaning, and then you retrieve the form. T-O-Ts show that by retrieving one part but not the other part, not the form. So our question was: do signers
experience what we call a tip-of-the-finger state, in parallel to tip-of-the-tongue? And the question is interesting because if you have this conflation between semantics and form, you shouldn't have a tip-of-the-finger, because once you get the meaning you should get the form, because they're so intimately entwined. And so we
conducted two studies, one was simply a diary study, we just had signers keep track. Did they ever have
this feeling of knowing this feeling of "oh I know the sign that I want but I can't retrieve it!" Did they ever
have that experience and they kept a diary for about a year. We also did a more experimental study where we tried to elicit these tip-of-the-finger states parallel
to the way experimenters have done it with
spoken languages, where, a little bit like what I did here with you guys, we show you a picture or a definition and you have to give me the word. So in this case we had a translation task where they were given proper names in English and had to give me the ASL proper-name sign. So we could sort of see if we could probe tip-of-the-fingers. Okay, so first, for the diary study, we found that all signers reported this feeling, so signers did experience this tip-of-the-finger state. They knew the sign but they weren't able to retrieve what the form of the sign was.
Interestingly enough they occurred at about the same rate that we see T-O-T's for
spoken languages. Now if there's really this conflation of meaning and form, they should be much rarer in sign languages, but they occurred at about the same rate: about once a week people would have this kind of
feeling. One of the things we know about tip-of-the-tongue experiences is that often you'll get some
information right and the most common type of information that
speakers will get is the beginning. You know it starts with a B or you
know what the first syllable is or something like
that. And what that's telling us is that in speech production there's
something very salient or accessible about
the onsets of words. So our question was what about sign
languages, do we find the same partial retrieval of form information. Okay, so this is where the elicitation study helped because in the diary study it was sort of difficult to write down what you
knew about the sign you were getting but in the elicitation study
when we presented someone with a proper noun to give us the
sign and they indicated they were in
this very frustrating tip of the finger state we could say "do
you know what the hand shape is?" "Do you know what the location is?" "Do you
know what the movement of the sign is?" So we could find out what
parts of the sign if any, could they retrieve. And it turns out that signers did
report partial information and this is one example where
she was trying to produce...they were trying to recall the sign for Scotland. And what the signer did
was something like this. Okay, so they knew the hand shape, they knew the movement, but they couldn't get the location on the shoulder. That was what took them a while to get. And when we measured what aspects of signs were retrieved, this group of features, the location, the orientation, the hand shape, were all retrieved about equally. I actually
had my money on hand shape, I thought hand shape would be something
that they were able to recall in part because sign language dictionaries are
organized by hand shape. There's something cognitively salient about hand shape. But hand shape by itself is not the onset, the beginning, of a sign. It's this bundle, location, orientation, and hand shape, that is perceived roughly simultaneously when you're understanding a sign. And so you can think of them as the onset of the sign. What unfolds over time is the movement. And that was the feature that
was least recalled in these T-O-F experiences. And so what that tells us is that
parallel to spoken language the retrieval process is very similar
that there's something very salient about the onsets of either words or signs. And critically it wasn't the
case that iconic features were retrieved more often.
So we analyzed whether the hand shape or the location
or the movement was particularly iconic in a sign. That didn't predict what was going to be
retrieved. So what does this tell us? Basically
it's evidence for sign language phonology: that there is a level of form that is separate from semantics, and that this fundamental distinction isn't affected by the fact that signs are often iconic. And it supports a lot of linguistic research, which I think is a really
fundamental discovery. That all human languages develop this
level of structure that you can call phonology that is separate
from meaning. For spoken languages these structures are based on vocal features: where the tongue is, whether the sound is voiced, the vocal
articulators, but for sign language you have similar structures but it's
based on manual features. So hand configurations, locations and
movements. But linguists are discovering that the
constraints, the nature of these forms are very parallel between the two. We come back to my question: do all human languages represent meaning and form independently? And the answer is yes. So now let me go to the next question: does the brain distinguish between
language and pantomime when they look the same? Before I get to this question now, I
think it's important to ask another question because we need to know
something just about the basic processing for sign language. So do we see basic parallels in brain structures that process spoken
language and sign language? Are those same key regions involved? I wanna go through that a little bit. And I'm gonna talk about two very
famous regions that are known to be critical for spoken
language processing. That's Broca's area, which we know does a lot of things, but which has also long been known to be key in spoken language production. And Wernicke's area, or posterior superior temporal cortex, which is known to be involved in comprehension of spoken language. And interestingly enough, Broca's area is
just in front of, just anterior to, the motor cortex that controls the vocal articulators, the lips, the tongue. So it kind of makes sense that you would have a region involved in speech production near the sensorimotor control of the speech articulators. Wernicke's area is just behind the auditory cortex, which makes sense
that an auditory comprehension system or region would be
near auditory cortex. But now this raises the
question what about sign languages right? Because sign languages use the hands as the
primary articulators. The hand representation, the sensorimotor representation of the hands, is much farther up on the motor cortex, not right next to Broca's area. And of course sign languages are perceived visually, and visual cortex is in the back of the brain. So does this difference in the input-output system fundamentally reorganize the language systems within the left hemisphere? Here's the answer. When we first look at output, so sign and word production, I'm showing you data from a study that
we did that was sort of a meta-analysis looking at studies of sign production and word production and
doing what's called a conjunction analysis to see what regions are equally active for both sign and word production. And these were picture naming tasks, where people would see a picture and have to produce the word or the sign. And what we see is that Broca's area is equally active for both sign and speech production. And this fits with
long-standing work looking at sign aphasia, suggesting that you have production problems if you have damage
to this area. These data also indicate that Broca's area is really not a speech area. So despite the fact that it's right next to the motor cortex for the speech articulators, and despite the fact that there are really strong connections between Broca's area and auditory cortex, it is nonetheless involved in the production of a visual-manual language. So this is a language region, not a speech region. Now while I'm on this slide I just wanna point
out one other region that was active for both sign and speech: this is left inferior temporal cortex. And I mention this because these were picture naming studies. And we're gonna see this later. This
is a region in the visual stream that's involved in object recognition. The activation here is left lateralized, and the idea is that this particular region mediates
between object recognition and lexical retrieval that is finding
the word that you want to label that picture. And that area is also active for both
speech and sign language. Okay, now what about language perception? Again we find that this Wernicke's area, this posterior superior temporal cortex, is active for comprehending sign language. Of course sign language is presented visually, and yet we're seeing activation in Wernicke's area, in this posterior STS region. Again this is telling us that this
region is not a speech region it's not tied to auditory speech
processing. And I'll point out one other thing: these two studies, our study here and this study by Petitto, actually presented what are called pseudo-signs or nonsense signs. They are analogous to nonsense words like garn or blick. So the activation in those studies wasn't so much about comprehending lexical items, because these forms didn't have meaning, but they were linguistically structured, so these regions were more active for the deaf people watching them, who know the sign language, than for hearing people who didn't know the sign language. For the hearing people these were just
hand movements. For signers these forms were linguistic
objects even though they didn't have meaning. They were like garn or blick: you recognize those as possible English words, and the brain recognizes these as possible
signs. So you had phonological processing going on in this region that's very close to
auditory cortex. The other thing that's worth
mentioning with respect to this activation in these sort of auditory
regions is we've looked at, we've done structural brain studies of deaf individuals who
were born deaf to look at what happens to their
auditory cortex. Do we see differences in auditory cortex between people who are born deaf and hearing people? And surprisingly it turns out that
auditory cortex does not atrophy and die in deaf individuals. We looked at two
regions one is Heschl's gyrus, so this is primary
auditory cortex, so the first place in the brain that sound reaches to be
processed. The size of Heschl's gyrus was not
different it wasn't smaller for deaf individuals, it was the same size for deaf
and hearing individuals. And we looked at what's called the Planum temporale
which is sometimes considered to overlap with Wernicke's area. We also didn't find any difference in
the size of the Planum temporale for deaf compared to hearing
people. In addition, for both of these
structures they were bigger in the left hemisphere, the language hemisphere, than
in the right hemisphere. So what this is telling us is that the reason we find a bigger left planum temporale and Heschl's gyrus in hearing people isn't because they were processing speech, or because of something to do with audition, because we find it in deaf people as well. So we still don't know exactly what's underlying this asymmetry; it could be language processing, it could be something else. But it's not related to hearing. These data also fit really nicely
with what I just showed you in terms of brain activation that these auditory regions are activated by visual
information for deaf signers. Both sign language and other kinds of visual stimuli activate auditory cortex in these
individuals. Okay so let me come back now to my sub-question. Are the key brain regions critical for sign language as well as
spoken language? Here we find the answer is yes. Now let me ask the
question about language and pantomime. And of course
this question arises because unlike words, signs can look like actions. And so how does the brain
tell the difference? Here are examples of what the signs look like. These are often called handling verbs, because the kinds of verbs I'm gonna be looking at show how you would hold or use a particular object, how you'd handle it. So here's the ASL sign for scrub and the ASL sign for drink. You can see these sort of show how you hold an object and how you use an object. If you were to pantomime those actions, it might look very much like that. But if you think about what's involved in a
pantomime it is determined a lot by the properties of the object. So if you're gonna pantomime
drinking from a straw, you might do something like this. Or if you're gonna pantomime drinking a shot, you might do something like this. A mug... you'll do different pantomimes depending on the object you're drinking from. But the ASL sign for drink means 'consume a liquid'; it doesn't mean drink
with a mug or drink with a cup or something like that. So you can use this sign in all those different contexts
just to mean that liquid was consumed. And the way
that is represented in a signer's brain, at least that's what we have hypothesized, is that just like the English word form 'drink' is stored in your lexicon, and when you want to use it you pull up what those sounds are, for a signer you have the phonological form of the sign that means 'consume liquid' that you pull up and produce when you're producing the sign. And so we wanted to know what neural
regions underlie these two different activities. How are they similar? How
are they different? Do they dissociate? Does the brain make a distinction? So I've told you about the brain areas that are
involved in language production but I need to tell you something about
the brain areas that are involved in pantomime production. So these are just a couple of studies of hearing people who were asked to pantomime how you would use different tools. They looked at what brain areas were involved and compared it to complex finger movement tasks. And what you find is in particular left
superior parietal cortex or SPL (superior parietal lobule) is engaged in
pantomime production. So this is a region at the top of the brain. And you don't find language areas
involved in pantomime production so you don't see Broca's area engaged for example when you're producing
a pantomime. So our hypotheses, given what we know about language and about the neural representation for pantomime, were these: if these pantomimic signs, signs like 'drink' or 'hammer' or 'brush your hair', are produced like pantomimes, then we should see greater activation in the superior parietal lobule. If they are produced like words, we should see activation in Broca's area, the area that's involved in retrieving lexical items. So you get the idea. We conducted a PET study; PET stands for positron emission tomography. Most of you are probably more familiar with fMRI, functional magnetic resonance imaging. They both measure brain function; they both essentially measure blood flow within the brain, so when a brain area is active, blood flows to that area. They just do it in somewhat different ways. The reason we use PET is that it's much more forgiving of movement, and we're really interested in sign production, so this allows us to have signers in the scanner and actually sign. If anybody has had an MRI, you know that you're told not to move, to lie as still as possible. Here you can move a little bit more.
The deaf individuals that we studied were all native signers, so this means that they were born into deaf families and acquired ASL from birth. This is important because we want to compare apples to apples: hearing people are exposed to their spoken language from birth, so we're going to compare them to deaf individuals who were exposed to sign language from birth. We asked our participants to do two things: given a picture, to either generate a verb or generate a pantomime. So in the first, verb generation, task, of course this is only the deaf people, since we were just interested in sign production, sign verb production. They were given a picture of a particular object and were asked to just produce the verb that goes with that object. In the pantomime condition they were again shown pictures of objects, but now they were asked to generate a pantomime: show me how you would use this object. And we had both hearing people and deaf people perform that task. And then in all these imaging studies you always have to have a baseline task against which to measure activation. Our baseline task was just to indicate whether the pictured item can be held or not. Okay, so let me just start with the verb generation
task. We actually had two types of pictures: one that would elicit these iconic handling verbs, and another set of pictures that would elicit verbs that didn't have these pantomimic qualities, so we could directly compare verbs that were like pantomimes and verbs that weren't. Here are some examples. These are the kinds of pictures that elicit the handling-type iconic verbs. So if you show a picture of a pen, people will most likely produce the verb 'write'; a picture of a hammer will produce the sign 'to hammer'. And we normed these with a group of deaf people to make sure that we could consistently elicit these verbs. We also had hearing people judge whether these were really iconic, whether they could guess the meaning, and they could, so these really did look very much like pantomime. Now in the non-pantomimic cases you were given an object, but the verb that was produced didn't have this kind of sensory-motor iconicity. So here's the verb 'to measure' in ASL, and this is my personal favorite, the verb for pouring syrup or pouring salad dressing; you can see it's not a handling form. And again, hearing people couldn't guess the meanings of these signs.
for the appeared my generation task now they
were told show me what you would do this object arm and we showed hearing people also
those ver those pictures that we asked signers to
generate birds to set the picture of the pen and hammer I'm we asked I'm hearing
people to generate pantomimes too so we could compare for
the same object a verb verses a pantomime I so here pictures I'm that we ask both I'm deaf people and
hearing people to generate pantomime cue I'm and we also make sure that the
picket lines that were produced I'm didn't look like the burbs that
would be associated with it so the signers couldn't find it cheat and
produce ever had to produce account so here's a
pantomime for I'm just a sweeping now the SL sighing
for sweet looks like this here's a Kathmandu spring eating with a fork now the for for each just looks like
this right but signers fast paint my pretties something like
that now our baseline task so here again use of pictures just like
been saying all along and that you were asked to just indicate
can you hold that object or not so if you could you
would do this if you couldn't you do this and that the non-negotiable
once the houses were just relatively rare andy is this this baseline was one we
can subtract out activation it was just do to seeing an object Manitoba logic we
can I'm active we could subtract an
activation it was just partly to just move your hands and we can subtract out activation that
was due to just thinking about the mandibular ability without actually generating ever
or a pantomime. Okay, so here's what we found. This is activation for deaf people producing pantomimes, and what you can see is what we would expect: superior parietal activation, actually bilateral and pretty extensive for the deaf signers. Interestingly, when you look at the hearing participants who also produced pantomimes, the superior parietal activation was more left lateralized. What's going on? Why do we see this difference, just in pantomime production, between the deaf and hearing folks? The deaf signers are just better at it, so they produced more complex pantomimes. For example, they might use two hands: seeing a spoon, a hearing person might just do a kind of lax little stir like this, whereas a deaf person would show you the cup or the thing and stir, and tended to repeat it as well, whereas hearing people didn't. Their hand shapes were also just much crisper than the hand shapes that were produced by the hearing participants. But the key is now
what happens when they're producing these verbs that look like pantomimes. And you find a very different pattern. Here what we find is activation in left inferior frontal cortex, extending into Broca's area. We don't see more activation in superior parietal cortex; it's engaging a language region. What about the hearing folks producing these same gestures? There we don't see Broca's activation for the hearing people; we see again this superior parietal activation, and it looks just like the activation I showed you on the previous slide. So for the deaf individuals the brain is treating these forms as linguistic, engaging language regions. Now, what about if we think about these two types of verbs, the handling verbs and the non-handling verbs? If the handling verbs somehow engage pantomime regions, we should see more activation in the superior parietal region for those verbs than for the non-handling verbs. But in fact what we find is no difference. So this is a contrast of activation: if we found red areas, those would be regions that were more active for the handling verbs; if we saw purple ones, those would be more active for the non-handling verbs. But what you see is neither, which means no difference in activation between the two. So the brain doesn't care that one is iconic and the other isn't; you're just getting language regions
engaged. So coming back to my question: does the brain distinguish between pantomime and language when they look the same? Yes. What we see is that signs engage this left inferior frontal cortex, which we know is involved in lexical search and retrieval processes for language production. Pantomimes, on the other hand, engage bilateral superior parietal cortex, involved in motor planning and control of the arms. Now, I need to be a little careful here, because the claim isn't that the neural systems for sign production and pantomime production are completely distinct and non-overlapping. No, these have overlapping neural circuits, so there are cases where sign can engage superior parietal cortex and pantomime can engage inferior frontal cortex, but they're engaged differentially, with different patterns of activation. Okay, on to the last question. And again
this arises, all these questions sort of stem, from the fact that you have a different biology for sign language, and this seems to have a particular impact on how sign languages encode information about space, about the locations of objects. So let me tell you what I mean. If you think about this scene, think about how you would describe it in English: the cup is on the table. Or if you know another language, think about how you would describe the relationship in that language. It turns out that most spoken languages use these sort of functional elements, closed-class words, grammatical words called prepositions, or locative affixes, that encode the spatial relationship. So 'on' in English, and in Italian and in Spanish and whatever language you know, think about what would be used to encode that. For sign languages something a little different happens: it's the location of the hands that indicates the location. So I can indicate 'on', I can indicate 'under', I can indicate 'on the edge of the table', I can indicate 'next to' or 'floating above'. It's where I place my hands in space that's telling you where the object is, not a particular morpheme, at least that's our argument, not a particular meaning unit like a preposition. Now, the other thing about the way sign languages do this, I should say, is
that they have to pick particular hand shapes, hand shape morphemes, that indicate the type of object: so a curved object, or a long thin object. And these are called classifier morphemes, because they're somewhat parallel to what's found in some spoken languages, where there's a particular morpheme that indicates the type of object. So this is an example from Diegueño, where this element right here indicates that the cutting is being done with a long thin object, like a knife, so this verb would mean 'to cut with a knife'. And if you were to put a different morpheme there, it would mean 'to cut with a curved object', like a scythe or scissors. And so both the hand shapes and these classifier morphemes indicate the type of object that's participating in the predicate. Now, I want to be a little careful here, because there's definitely controversy among sign language linguists about whether you should really analyze these as classifier constructions or not. But for my purposes the main idea is to see the parallel: that there are specific hand shapes that you have to use to represent objects in these spatial relationships, and it's sort of parallel to what's going on in some spoken languages that have morphemes that indicate
object type. So our question now is: what are the consequences of this spatial language system that's found in sign languages? And I should mention that I know of no sign language that works like a spoken language with respect to spatial language; that is, no sign language uses prepositions or locative affixes as the primary way of describing spatial locations, giving someone directions, telling someone what a kitchen is going to look like. They're just not used. Invented sign languages, where you transfer a spoken language into sign, will have these kinds of prepositions, but naturally emerging languages, languages that emerge from communities of users, do not use that type of system; they use space to indicate spatial relationships. So what does the brain do? Again, we
conducted a PET study. We asked participants to do different tasks, so that we could focus on the expression of space versus the expression of objects. In one case we asked participants to produce a classifier form indicating where an object was; the object didn't change. So this would be, you know, a clock in different positions with respect to a table, and you can see how the sign would indicate these locations: you basically just do a mapping from what you're seeing to signing space. And our hypothesis is that these locations are not meaning units in the same way that prepositions are, in the sense of being retrieved from a stored set in the lexicon. We contrasted that with a task where we asked participants to produce the classifier hand shape for a given object. So now the objects change but the location doesn't, so you're really focusing on what's the right hand shape to indicate that object. In this case you might have a general object classifier, or a long thin object classifier. Now you're really focusing on the object. And then we compared activation when participants were doing the location expression task versus the object expression task. Okay, so when we look at the
brain areas that are really engaged in expressing the location per se, what we see is activation bilaterally in superior parietal cortex. This is a bit more posterior than the activation that we were looking at for pantomime production. And these are regions within superior parietal cortex that we know are involved in a number of visuospatial processes: spatial attention, and visual-motor transformation, which basically means taking visual information in and translating it into a body-centered representation, so that you can move your hand towards a particular location in space or towards a particular object. And we also know that parietal cortex is involved in motor control of the hands in space. So our idea is that these regions are engaged because you have to produce a more gradient representation of where these locations are, and that's going to engage superior parietal cortex. Now this is really quite different from
what's found when we ask people to do similar tasks in spoken languages, where you have to produce a preposition. In this case these are data from both comprehension and production tasks, where people are asked to just name the spatial relationship: so 'in', 'on', 'beside', 'next to'. The region is not superior parietal cortex; it's farther down, in the inferior parietal region, a region called the supramarginal gyrus. And the hypothesis here is that this region, which sits in what's sometimes called the 'where' pathway, for locating where objects are, and which is left lateralized, you only see activation in the language hemisphere, is involved in retrieving a sort of categorical representation of space, the semantics of space, categories of spatial relationships. What's being activated in that region is this mapping between a scene and a categorical representation of the location, one that maps onto a linguistic structure, a preposition or a locative affix, for example. That's very different from the neural computation that has to be done for sign language, where you don't have a categorical representation but a much more gradient representation, where the exact location of the hand in space is critical. Okay, so what about the other task?
Now we're not looking at location, we're looking at the objects, and now we find language regions engaged. One is this region I was telling you about before, inferior temporal cortex, the region that we know is engaged in object analysis, recognizing objects. It's left lateralized, so it mediates between object recognition and retrieval of the correct hand shape, the correct classifier morpheme. We also see Broca's area involved, again lexical search and retrieval: you have to pull out the right hand shape. And the idea is that these hand shapes are actually stored in the lexicon and have to be retrieved, unlike the locations or the movements. And when we simply asked signers to name these objects, to give us the sign for 'clock' or 'hammer', these exact same regions are engaged, Broca's area and inferior temporal cortex. So retrieving your classifier hand shape is very much like retrieving a lexical sign; these are stored in the lexicon. All right, so if we come back to the question, does the biology of
linguistic expression impact the neural basis for spatial language, the answer is yes. So what does this mean? It means that we've got a very interesting effect of biology on the neural computations that have to be done for expressing spatial language. For sign languages, you have to do this mapping from either the scene you're describing or a mental image into a body-centered representation, where the objects are now represented by the hands: locations are represented by locations, and you can have these gradient representations. What we argue is that, unlike the hand shapes, the classifier morphemes, that you retrieve, these locations and movements are not stored as morphemes; they're sort of produced on the fly as you're describing these relationships. And so they're quite different from prepositions or locative affixes, and you use different brain systems for their production; in particular, what you see is bilateral SPL, superior parietal lobule, activation. Now, before I sum up, I wanna make
sure you appreciate this, because often you may think, well, spatial language in sign is really easy, right? All you do is put your hands in space and just match things up. Any of you who are ASL students know that when you hit classifiers, they're not easy to learn, and that's because there are a lot of constraints on how these different hand shapes can be put together and on how you interpret these relationships. So I'm just gonna give you a few of these constraints so you can get a flavor for what has to go into the grammar, what you have to learn to be able to produce these. So here's an example: indicating that the boy is in the car. I can sign it so it looks like this: this is the vehicle classifier, and this is sort of a seated person. If I do it like this, it means the boy is sitting next to the car. If I sign it like this, with the arc motion, I never really told you that the boy got into the car. So even though at the end my hand is in the same position, you have to understand it not as a 'next to' relationship but as an 'in' relationship. So it's not always completely obvious; it's not just where the hands are that indicates the spatial relationship. There are also constraints on which classifier hand shape you can use, and they're not at all obvious. So, indicating a person standing on a surface like the hood of a car: this is the classifier that means 'upright person'; you can use it for someone walking through the woods or something like that. But you cannot use it here; this is weird, even though it would make perfect sense, a surface with an upright person on it. You have to use what's called the 'legs' classifier, this classifier, to describe that picture. It's something you have to learn. There are also constraints on what's
called markedness. So this is an upright thin object, a pen. I should tell you that a star in linguistic notation means bad, or not good. And just to make sure you can see the pictures: the pens are either all upside down or they're all pointing upwards. Now if I do this, this is the unmarked form; it doesn't necessarily mean the pens are upside down or upright, I can sign it for either one. But if I do this, that's marked, and it has to mean the pens are pointing upwards, because this is the top of the pen. Not an obvious thing that you would know; you have to learn it. And this is just a handful of these kinds of constraints, so it's a very complex system; kids take a long time to learn it. And I haven't even talked about perspective. But we were particularly interested in just starting with the basics, keeping it very simple, and seeing what differences we see between spoken and sign languages, and what brain regions need to be involved to do just the simple part, placing your hands in space. It will take a lot more work to tease apart what's going on in these very complex classifier constructions. Okay, so what do I want you to take home with you
today? One of the things is that I've shown you two reasons why iconicity, this fact that signs often look like what they mean, doesn't really alter the fundamental organization of human language or its brain basis. The evidence from tip-of-the-finger states suggested that despite the fact that signs have a lot of form-meaning overlap, form and meaning can be retrieved independently, in two stages. And the neuroimaging results comparing signs that look like pantomimes and signs that don't showed that the brain doesn't care whether a sign is iconic or not: you get the same left hemisphere language-related regions engaged in their production. It doesn't matter whether they have this mapping between meaning and form. That said, there's a lot of really interesting work going on right now to explore, well, it doesn't change the fundamental nature of brain organization for language, but do you see an interesting role for iconicity, for example in acquisition or in processing? People are starting to look at that. There are also very interesting constraints on metaphor that have been discovered that are based on whether signs are iconic or not. So there's a need to look at what iconicity does, but it doesn't really alter the basic structure of language or the neural systems. We do see differences with respect to
spatial language. There we really do see a different system, both linguistically, with respect to this use of signing space, and it comes from the biology: it's very easy to show where my hands are, but you can't point with your tongue, you can't see the articulators, so it's not a natural kind of system to emerge for spoken languages. And we see a reflection of that in the brain bases for spatial language, differences between signed and spoken languages. So, thank you. I think it's really important to thank the people
who participate in our studies, because without their volunteering the research simply isn't possible. So it's really important to say a big thank you to all the people who volunteer their time to participate in our studies. I also have to thank my funding agencies, and I want to thank my colleagues, in particular from UCSD, USC, and the University of Washington, who collaborated on the projects that I talked about today. Now, before moving to questions, I
couldn't resist at least letting you know some of the other things that are going on in the lab, in case there are students or colleagues who might be interested. So one domain I haven't talked at all about today, but that Steve mentioned, is this notion of bimodal bilingualism. What does that mean? Basically it's bilinguals whose languages are in two different modalities: one is the visual-manual modality and the other is the auditory-vocal modality. We contrast that with what we sometimes call unimodal bilinguals, since most studies of bilingualism have been done on people who know two spoken languages. Our question is: how does this
different kind of bilingualism change things? How are bimodal bilinguals different from unimodal bilinguals, and what might be special about being a bimodal bilingual? For example, we've already shown previously that bilinguals in a sign language have certain enhanced face processing abilities, and they have certain superior abilities in mental imagery. We also see parallels between bimodal and unimodal bilinguals. One of the things that's been shown over and over again is that if you're bilingual, you can't really suppress your other language; it's always on. Anybody out there who's bilingual, you might have this sense, right? It's always kind of on there. It's turning out that bimodal bilinguals look like unimodal bilinguals in that the other language can subtly impact processing of the language that they're not actually speaking. And we find a really cool effect for bimodal bilinguals in their co-speech gesture. Those of you who are signers, or are taking sign language classes, think about whether you've experienced this yourself: once you learn a sign language, and you're now interacting with somebody who doesn't sign, you gesture just a little bit more, right? And also, surprisingly, sometimes you produce an ASL sign, even though of course the person doesn't know ASL at all. It's just that the sign language is there and it just comes out, in a way that we don't see for spoken language, because you can't just produce a Spanish word with someone who doesn't use Spanish; that would be a breakdown in communication. But for signs we see this interesting influence. The other area that we're
starting to really look at now is reading and the deaf brain. We know that for hearing readers, phonological ability, being able to map sound to print, is really critical to learning to read, to literacy. But it's turning out that this phonological decoding, or phonological awareness ability, is actually not that predictive of how well a deaf individual reads. And so what we're doing is exploring what alternative paths to literacy there might be for deaf individuals, by, for example, mapping the neural circuits of very skilled deaf readers, who may not have great phonological awareness skills but are reading at a college level. Do their brains arrive at the same neural solution that hearing readers' brains do? And we're also interested in looking at the role that fingerspelling may play in supporting reading acquisition and in reading in general. And now let me end by acknowledging my
fabulous lab. We have such a terrific group of students and researchers and postdocs in this lab; I'm really blessed to work with fantastic people. So I have to end by thanking them, and then I'll move to taking your questions. We're going to have a reception afterwards, in about 20 minutes. So, a couple of things before we finish.
Obviously I should also acknowledge the parents of the kids in the videos I showed. [inaudible] ...she's representing our university [inaudible], and with that we'd like to present her with [inaudible]. Thank you.