Hi! This week we will focus our discussion on the next sensory system, which is the auditory system that's responsible for our sense of hearing. We will begin this video with a brief overview of the stimulus energy that activates our auditory receptors, namely sound.
Sound, as we perceive it, is a psychological phenomenon produced when air vibrations are detected by an organism. The range of audible sounds varies by species. According to Hudspeth (2014), under optimal conditions human hearing is sensitive to sounds that vibrate the eardrum by less than one-tenth of an atom's diameter, and we can detect a difference between two sounds as small as one-thirtieth of the interval between two piano notes.
Humans use hearing to obtain useful information. For example, hearing footsteps in your own home or a snapped twig in the forest tells you that you're not alone. If you hear breathing, you know some person or animal is close by.
And when you hear a familiar friendly voice, you know everything is fine. Sound waves are defined as periodic compressions of air, water, or other media that are processed by our auditory system. For example, when a tree falls, the tree and ground vibrate, resulting in sound waves that activate receptors in the ears. Or when you're at a concert, you can feel the air around you vibrate as a result of the sounds coming from the musical instruments. Like light waves, sound waves also have three properties.
The first is the sound wave's amplitude, which indicates the intensity, or loudness, of a sound and is usually measured in decibels. The amplitude of the wave corresponds to the amount of pressure the sound wave produces relative to a standard. The typical standard, 0 decibels, is the lowest intensity of sound that is audible to humans.
Generally, the higher the amplitude, or the higher the decibel level, the louder we perceive the sound, because the air presses more forcibly against your ears during loud sounds than during quiet sounds. There is an exception to this, however: the sound of someone speaking rapidly seems louder than slow music played at the same physical amplitude.
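To make the decibel arithmetic concrete, here is a minimal Python sketch; it assumes the standard 20-micropascal reference pressure for 0 dB, and the example pressures are rough, typical values chosen for illustration rather than figures from this lecture.

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals, roughly the threshold of human hearing (0 dB)

def sound_pressure_level(pressure_pa):
    """Convert a sound pressure (in pascals) to decibels relative to the 20 µPa reference."""
    return 20 * math.log10(pressure_pa / P_REF)

# Each tenfold increase in pressure adds 20 dB:
print(sound_pressure_level(0.002))  # ~40 dB, roughly a quiet room
print(sound_pressure_level(0.02))   # ~60 dB, roughly ordinary conversation
```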
The second property of a sound wave is its frequency, which is measured in hertz. Frequency is defined as the number of air compressions per second, and it indicates the pitch of a sound. Higher frequency sounds are perceived as higher pitched, like the sound of a flute or a soprano voice, while lower frequency sounds are perceived as lower pitched, like a contrabass or a bass voice. Human hearing is limited to a certain range of frequencies. Adult humans can typically detect sounds between 15 and 20,000 hertz, although children can generally hear even higher frequencies, because the ability to perceive high frequency sounds decreases with age and with exposure to loud noises. Larger animals like elephants can hear lower pitches, while small animals like mice can hear pitches higher than those audible by humans. The third property of sound is timbre, which is determined by the complexity of the sound wave.
Most of the sounds we hear, like speech sounds and sounds from musical instruments, are complex sounds composed of many frequencies and amplitudes combined simultaneously. Timbre is the dimension of sound that allows us to perceive the qualities of different sounds. Timbre allows us to hear differences in the tone qualities of the voices of two people singing the same song, as well as the differences between two musical instruments playing the exact same note. For example, a musical instrument playing a note at 256 hertz will simultaneously produce sounds at several other frequencies, commonly referred to as the harmonics of the principal note. These harmonics vary for different musical instruments, resulting in the different sound qualities that we associate with them.
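As a rough illustration of how harmonics give rise to timbre, the Python sketch below builds two complex tones with the same 256 Hz fundamental but different harmonic amplitude profiles; the profiles themselves are invented for illustration. The two waves would be heard at the same pitch but with different tone qualities.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)  # one second of time points

def complex_tone(fundamental_hz, harmonic_amplitudes):
    """Sum the fundamental and its integer harmonics.

    harmonic_amplitudes[0] scales the fundamental, [1] the second harmonic, and so on.
    Different amplitude profiles give the same pitch but a different timbre.
    """
    wave = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amplitudes, start=1):
        wave += amp * np.sin(2 * np.pi * fundamental_hz * n * t)
    return wave / np.max(np.abs(wave))  # normalize so overall loudness stays comparable

# Two hypothetical "instruments" playing the same 256 Hz note:
instrument_a = complex_tone(256, [1.0, 0.2, 0.05])      # energy mostly in the fundamental
instrument_b = complex_tone(256, [1.0, 0.7, 0.5, 0.4])  # stronger upper harmonics
```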
A person can also vary their voice's pitch, loudness, and timbre to communicate different emotions. For example, you can vary the way you say "that's interesting" to indicate approval, when you really do find something interesting; sarcasm, when you really mean that it's boring; or suspicion, when you think that someone is hinting at something.
Conveying emotional information through alterations in voice is known as prosody. Here you can see the different physical and perceptual dimensions of sound waves. Higher amplitude waves are perceived as louder, and lower amplitude waves are heard as softer. Fewer sound waves within a time interval, or low frequency sounds, are perceived as low pitched, while more sound waves within the same interval, or higher frequency sounds, are perceived as high pitched. In the bottom part, you can also see how the complexity of a sound wave determines your perception of a sound's quality. Next, let's focus on the structure of the ear. The ear can generally be divided into three parts, namely the outer ear, middle ear, and inner ear.
Let's start first with the outer ear, which consists of the pinna and the external auditory canal. The pinna is the outermost portion of the ear that's made up of flesh and cartilage attached to each side of the head. The pinna helps us locate the source of a sound by altering how sound waves are reflected.
The pinnae of some animals, like cats and rabbits, are movable and therefore allow those animals to localize sounds with greater accuracy than humans. The pinna acts as a funnel that collects sound waves before the auditory information passes through the external auditory canal and arrives at the middle ear. The middle ear is a product of evolution that accommodated the need of land animals to hear on land. Early in evolutionary history, animals that lived in water, like fish, developed simple hearing receptors because sound travels differently in water than it does on land. Early land animals, however, could only hear low-frequency sounds that vibrated the whole head. To compensate for this, land animals evolved the structures of the middle ear and inner ear.
The middle ear is comprised of the eardrum or tympanic membrane and three middle ear bones. When sound waves reach the middle ear, they cause vibrations of the tympanic membrane, which then cause vibrations of the three middle ear bones, namely the malleus or hammer, the incus or anvil, and the stapes or stirrup. The vibrations are amplified by these three bones, eventually producing greater pressure on the oval window, which is a membrane of the inner ear that is attached to the end of the stirrup.
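A back-of-the-envelope sketch of why this arrangement increases the pressure at the oval window: the eardrum's vibrating surface is much larger than the oval window, so the same force is concentrated onto a smaller area, and the lever action of the ossicles adds a further boost. The area and lever values below are commonly cited approximations, assumed here for illustration only.

```python
# Approximate textbook values, assumed for illustration
TYMPANIC_AREA_MM2 = 55.0     # effective vibrating area of the eardrum
OVAL_WINDOW_AREA_MM2 = 3.2   # area of the oval window
OSSICLE_LEVER_RATIO = 1.3    # mechanical advantage of the malleus-incus lever

# Pressure = force / area, so funneling the same force onto a smaller area raises the pressure.
pressure_gain = (TYMPANIC_AREA_MM2 / OVAL_WINDOW_AREA_MM2) * OSSICLE_LEVER_RATIO
print(f"Approximate pressure amplification: {pressure_gain:.0f}x")  # on the order of 20x
```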
When the stirrup vibrates the oval window, the resulting vibrations cause movement of the fluid in the cochlea, an inner ear structure shaped like a snail. Within the fluid-filled cochlea are auditory receptor cells known as hair cells, which line the basilar membrane. On the other side of the hair cells is the jelly-like tectorial membrane. Movement of the cochlear fluid displaces the auditory hair cells against the tectorial membrane, which stimulates the auditory nerve, part of the eighth cranial nerve, ultimately producing impulses that are interpreted by the brain as sound. The auditory hair cells are delicate, so exposure to extremely loud noises can damage them, leading to hearing loss or difficulties in hearing.
Our ability to understand speech or enjoy music depends on our ability to distinguish sounds of different frequencies. How exactly do we perceive different pitches, or different frequencies of sound? There are at least three theories of pitch perception, namely the place theory, the frequency theory, and the volley principle. According to the place theory, each region along the basilar membrane has its own sensitivity to a certain frequency of sound and reacts to that particular frequency with vibrations. The basilar membrane resembles a piano string, with each area along the membrane tuned to a specific frequency. This means that each frequency of sound can only cause a response in one particular area along the basilar membrane.
The task of the brain is to determine what frequency is being heard based on the location of the neurons that are activated by a particular frequency of sound. There is a drawback to this theory, however, in that the various parts of the basilar membrane are so tightly bound to one another that it would be extremely difficult for an individual area of the basilar membrane to resonate like a piano string without also activating neighboring areas. In 1960, Georg von Békésy described how stimulation of the oval window causes a traveling wave in the basilar membrane, similar to the ripples that appear when we throw a stone into the water. Because the cochlea is a long tubular structure, this ripple can only travel in one direction, specifically from the base of the basilar membrane to the apex. The place theory maintains that higher frequency sounds cause displacement of the hair cells closer to the base of the basilar membrane, while low frequency sounds activate hair cells closer to the apex.
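To see what a place code could look like numerically, the sketch below uses Greenwood's frequency-position function, a standard approximation of the human cochlea's tuning that is not part of this lecture but matches its description: positions near the apex respond to low frequencies and positions near the base to high frequencies.

```python
def greenwood_frequency(position_fraction):
    """Greenwood's approximation of the human place-frequency map.

    position_fraction runs from 0.0 at the apex (low frequencies)
    to 1.0 at the base (high frequencies); returns the characteristic
    frequency in hertz at that position.
    """
    return 165.4 * (10 ** (2.1 * position_fraction) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} -> ~{greenwood_frequency(x):.0f} Hz")
# apex ~20 Hz ... base ~20,000 Hz, spanning roughly the range of human hearing
```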
The second theory of pitch perception is the frequency theory, which suggests that the entire basilar membrane vibrates in synchrony with the sound wave that stimulates it, causing axons of the auditory nerve to produce an action potential at the same frequency. So for example, a 50 hertz sound will result in 50 action potentials per second in the auditory nerve. Our perception of the frequency of sound then depends on how frequently the auditory nerve responds. High-frequency sounds cause more frequent responses, and low-frequency sounds cause less frequent responses.
This theory, of course, has its own weakness. Remember that a neuron undergoes a refractory period lasting about one one-thousandth of a second, which means that under this theory the auditory nerve could fire at most about 1,000 times per second, far below the highest sound frequency that we as humans are able to hear. Currently, the more popular view is a combination of modified versions of place theory and frequency theory.
For low frequency sounds, up to about 100 Hz, the basilar membrane is believed to vibrate in synchrony with the frequency of the sound, and the auditory nerve produces one action potential for each wave of sound. This is, of course, consistent with the frequency theory. The frequency of impulses identifies the pitch, while the number of cells that are active corresponds to the loudness of the sound.
That is, soft sounds activate fewer neurons, and stronger sounds activate more neurons. For frequencies above 100 Hz, however, it becomes more difficult for any single neuron to continue responding in synchrony with the detected sound wave. Therefore, according to the volley principle, for sound frequencies between 100 and 4,000 Hz, clusters of nerve cells fire neural impulses in rapid succession, producing a staggered volley, or cascade, of impulses that allows the auditory nerve as a whole to generate up to 4,000 impulses per second.
No individual auditory neuron can reach that frequency alone; instead of working individually, groups of auditory neurons team up and alternate their responses. Nonetheless, this volley mechanism still cannot account for the perception of sound waves above 4,000 Hz. As an analogy for how the volley principle works, imagine a squad of soldiers who can each fire only one round before they have to reload their weapons.
If all soldiers fire at the same time, then the frequency of firing will be limited, and there's no way of speeding up the process because the soldiers would need time to reload their weapons. But if the soldiers coordinate with each other in groups, and the groups fire in turn, then there will be some soldiers who fire their weapons while others are reloading. Naturally, this will result in all soldiers as a whole being able to fire their weapons more frequently.
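The arithmetic behind this analogy can be sketched in a few lines of Python. The cap of about 1,000 impulses per second per neuron (from the roughly 1 ms refractory period) and the grouping scheme below are simplifying assumptions used only to illustrate the idea.

```python
def volley_firing(stimulus_hz, n_groups, max_rate_per_neuron=1000):
    """Illustrate the volley principle: groups of neurons take turns firing on
    successive cycles of the sound wave, so the pooled auditory nerve response
    can follow frequencies no single neuron could match on its own.
    """
    per_neuron_rate = stimulus_hz / n_groups   # each group fires on every n-th cycle
    if per_neuron_rate > max_rate_per_neuron:
        raise ValueError("Even with staggered groups, individual neurons cannot keep up.")
    pooled_rate = per_neuron_rate * n_groups   # impulses per second across the whole nerve
    return per_neuron_rate, pooled_rate

# A 4,000 Hz tone tracked by four alternating groups:
per_neuron, pooled = volley_firing(4000, n_groups=4)
print(per_neuron, pooled)  # each neuron fires 1,000 times/s; the nerve as a whole, 4,000 times/s
```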
For hearing frequencies higher than 4000 Hz, we use a mechanism similar to place theory. That is, high frequency sounds will stimulate a particular area of the basilar membrane, as suggested by place theory. However, for low frequency sounds, most of the basilar membrane will vibrate, and it becomes more and more difficult to identify the frequency simply by determining the exact location on the basilar membrane that's activated by the sound.
Auditory information received by the ears is transmitted to the auditory cortex in the brain. Note the image on the right side. Auditory input originating from each ear is sent to the cochlear nucleus on the ipsilateral side.
Auditory information then passes through several subcortical areas before crossing over in the midbrain. Although each hemisphere of the forebrain eventually receives input from both ears, each hemisphere gets the majority of its auditory input from the contralateral ear. The ultimate destination of the auditory information is the primary auditory cortex, or area A1, located in the superior temporal cortex.
Area A1 is essential for auditory imagery. Kraemer, Macrae, Green, and Kelley (2005) demonstrated the role of area A1 in auditory imagery by presenting people with familiar and unfamiliar songs with three- or five-second silent gaps inserted in parts of the song. People who heard familiar songs reported being able to hear in their heads the melody or lyrics that should have been in the gaps, and they also showed increased activity in area A1 during that time. With unfamiliar songs, by contrast, they didn't hear anything in their heads, and area A1 remained unresponsive during the gaps.
The organization of the auditory cortex is somewhat similar to the organization of the visual cortex. Just as the visual system has separate pathways for identifying objects and for determining an object's location, auditory information is also sent through a what and a where pathway. The what pathway, which ends in the anterior temporal cortex, is specialized for identifying patterns of sounds, while the where pathway, which ends in the posterior temporal cortex and the parietal cortex, processes the location of sounds.
Meanwhile, the superior temporal cortex detects the movement of an auditory stimulus. Just as damage to the middle temporal area causes motion blindness, damage to the superior temporal cortex can result in motion deafness, in which an individual can hear a sound but is unable to track the movement of the sound source. As with the visual system, normal development of the auditory system depends on experience. Both constant exposure to loud noises and constant silence will impair development of the auditory system.
In constant noise, it would be difficult to identify and learn about individual sounds that exist in the environment. But there are differences between the visual and auditory systems, especially with regard to the effect of damage to the primary cortex. You've learned before that damage to the primary visual cortex can result in cortical blindness.
In contrast, damage to the primary auditory cortex does not result in deafness, but rather in difficulty recognizing combinations or sequences of sounds, such as speech and music. People with damage to the auditory cortex can still identify single sounds, though, suggesting that the primary auditory cortex is most important for processing complex auditory information.
In the primary auditory cortex, cells show a preference for certain tones. Cells that tend to respond to similar tones group together, forming what's known as a tonotopic map. Thus, the cortical area with the greatest response provides information about which tone is being heard. For example, in this picture, you can see how the part of the primary auditory cortex marked by the green box responds when the highest notes on the piano are heard, while the part indicated by the yellow box responds to very high, squeaky sounds. While everyone has a tonotopic map in their auditory cortex, the configuration of the map varies from one person to another.
Surrounding the primary auditory cortex are the secondary auditory cortex and additional areas that respond better to auditory stimuli coming from objects, such as animal calls, bird songs, machinery noises, music, and human speech. Researchers have also found that some cells in the primary auditory cortex respond more strongly to the pronunciation of certain speech sounds, such as vowel sounds or nasal sounds, like those associated with the letters M and N. The auditory cortex is important not only for hearing, but also for thinking about concepts related to hearing.
In a study conducted by Bonner and Grossman in 2012, participants were asked to look at an arrangement of letters and to press a key to indicate whether each arrangement formed a meaningful word. This task is relatively easy, so most participants were almost always correct. Participants with damage to the auditory cortex were also able to respond appropriately, except when responding to words related to sounds.
Oftentimes, when they saw words associated with sounds, such as the word lightning, they reported them as non-words. This finding indicates that if a person cannot imagine a sound, then words relating to that sound appear meaningless to them. Next, let's look at how we localize sounds. When we hear a loud sound, we naturally want to know what it is and where it's coming from. Sound localization helps us answer this question.
Although the localization of sound is not as accurate as visual localization, it is still impressive regardless. Owls, for example, can find the location of sounds well enough to catch a mouse in the dark. Determining the direction and distance of a sound requires a comparison of the responses of both ears.
There are at least three methods that we use to localize sounds. The first is determining the difference in a sound's time of arrival at the two ears. The second is comparing the intensity of the sound received by the two ears. And the third is determining phase differences between the two ears. Let's start with the first method, the difference in the time at which the sound is received by the two ears. A sound that originates from one side of the body will reach the closer ear up to about 600 microseconds before it reaches the other ear.
A smaller difference between the two ears in the sound's time of arrival indicates that the source of the sound is closer to the midline of your body. This particular method is useful when we're trying to localize sounds with a sudden onset, such as the sound of a dog barking coming from your left side. The sound will arrive at your left ear before it's received by the right ear, which allows you to identify the direction the bark is coming from, in this case that the dog is off to your left.
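A simplified geometric sketch shows where the roughly 600-microsecond figure comes from. The ear-to-ear distance of 0.20 m and the straight-line path geometry are assumptions made for illustration, not values given in this lecture.

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second in air
HEAD_WIDTH = 0.20       # assumed ear-to-ear distance in meters

def interaural_time_difference(angle_deg):
    """Approximate extra travel time to the far ear for a source at angle_deg
    from straight ahead (0 = midline, 90 = directly to one side)."""
    extra_path = HEAD_WIDTH * math.sin(math.radians(angle_deg))
    return extra_path / SPEED_OF_SOUND  # seconds

print(interaural_time_difference(90) * 1e6)  # ~580 microseconds for a source directly to the side
print(interaural_time_difference(10) * 1e6)  # ~100 microseconds for a source near the midline
```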
The second method is determining the difference in the sound's intensity between the two ears. For high frequency sounds with wavelengths shorter than the width of the head, the head creates a sound shadow that causes the sound to be perceived as louder in the ear closer to the source. In adult humans, this method results in accurate sound localization for frequencies above 2,000 to 3,000 Hz, but is less accurate for the localization of lower frequency sounds.
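A quick wavelength check (assuming a speed of sound of about 343 m/s in air) shows why the head shadows only relatively high frequencies: only sounds whose wavelength is shorter than the head's width are blocked rather than bending around it.

```python
SPEED_OF_SOUND = 343.0  # meters per second in air

def wavelength_m(frequency_hz):
    """Wavelength of a sound in air at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

print(wavelength_m(3000))  # ~0.11 m: shorter than a head's width, so a sound shadow forms
print(wavelength_m(500))   # ~0.69 m: much longer than the head, so the wave bends around it
```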
Individuals who are unable to see often use sound shadows to orient themselves to their environment. From the two methods discussed thus far, we know that the difference in the arrival time of a sound at the two ears and the difference in its intensity between the two ears can help us localize sounds. However, we often find it difficult to localize sounds coming from directly ahead, directly above, or directly below our head, because the sound reaches both ears simultaneously, leaving no difference that can provide information about the location of the sound.
An alternative method for localizing sound is by evaluating the phase difference between the two ears. Each sound wave has phases with peaks that are separated by 360 degrees. Sounds originating from the sides of the head will reach the two ears out of phase, while sounds originating from directly in front or directly behind the head will reach the two ears in phase.
How much the sound waves are out of phase depends on the frequency of the sound, the size of the head, and the direction of the sound. The more out of phase the waves are when they reach the two ears, the farther the sound source is from the midline of the body. Phase differences provide useful information for localizing sounds up to about 1,500 Hz in humans, which includes the typical frequencies of speech and music. To summarize, we localize low frequency sounds based on phase differences, while high frequency sounds are localized by differences in the intensity or loudness of the sound at the two ears.
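Using the roughly 600-microsecond maximum time difference from the earlier sketch, the phase difference between the ears is simply 360 degrees times the frequency times the time difference; once that product exceeds a full cycle, the cue becomes ambiguous, which is why phase differences stop being useful somewhere around 1,500 Hz. The numbers below are illustrative assumptions, not values from this lecture.

```python
MAX_ITD_S = 600e-6  # assumed maximum interaural time difference, in seconds

def interaural_phase_difference(frequency_hz, itd_s=MAX_ITD_S):
    """Phase difference in degrees between the two ears for a given frequency and time difference."""
    return 360.0 * frequency_hz * itd_s

for f in (200, 1000, 1500, 4000):
    print(f, interaural_phase_difference(f))
# 200 Hz -> ~43 deg, 1000 Hz -> 216 deg, 1500 Hz -> 324 deg (still less than one cycle),
# 4000 Hz -> 864 deg: more than a full cycle, so the phase cue no longer indicates direction
```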
We can also localize sounds with a sudden onset based on the difference in the sound's time of arrival at the two ears. All three methods require learning, because as our head grows, the distance between the two ears increases, requiring us to readjust our strategies for localizing sound. What about people who suddenly lose hearing in one ear?
For them, at least in the beginning, all sounds will of course be perceived only by the intact ear. That particular ear will hear sounds louder and sooner than the other ear. Eventually though, the individual will learn to reinterpret the intensity of the sound when they hear a familiar sound in a familiar location.
The individual will then be able to deduce that louder sounds come from the side of the intact ear and softer sounds come from the opposite side. While this strategy doesn't produce the greatest accuracy, it does appear to be useful in certain situations. Next, let's look at several individual differences in pitch perception, which include tone deafness, or amusia, and absolute pitch, or perfect pitch.
Tone deafness, or amusia, is an impairment in detecting small changes in sound frequency. People with amusia are not completely tone deaf, but they do have difficulties recognizing melodies, deciding whether a tune is off key, recognizing a wrong note in a melody, and recognizing people's moods from their tone of voice. The auditory cortex of individuals with amusia functions relatively normally, but it has fewer connections with the frontal cortex, resulting in poor memory for and poor attention to pitch. On the other side of the spectrum are people with absolute pitch, or perfect pitch.
People with absolute, or perfect, pitch are able to hear a note and identify it accurately. For most people who are familiar with music, recognizing the note G when you know what the note C sounds like is easy. But without first hearing C as a point of reference, it becomes much more difficult to identify a note as G. People with absolute pitch, however, can accurately identify a note even without a reference point on which to base their judgment. Various factors influence absolute pitch,
including genetic predisposition, early musical training, and speaking a tonal language, such as Vietnamese or Mandarin. While not everyone with early musical training develops absolute pitch, almost everyone with absolute pitch had early musical training. In addition to individual differences like amusia and absolute pitch, hearing loss may also account for differences in people's hearing abilities. One form of hearing loss, known as conductive or middle ear deafness, results from failure of the bones of the middle ear to transmit sound waves to the cochlea. Conductive deafness can be caused by a variety of factors, including diseases, infections, or tumors in the middle ear.
Some forms of conductive deafness may be temporary, while others can be corrected by surgery or hearing aids. People with conductive deafness have a normal cochlea and a normal auditory nerve and therefore are able to hear their own voices. As a consequence, they often accuse others of talking too softly.
Another form of hearing loss is nerve, or inner ear, deafness, which is caused by damage to the cochlea, the hair cells, or the auditory nerve. There are wide variations in the severity of nerve deafness; some people have nerve deafness that affects only one part of the cochlea, resulting in impaired hearing for certain frequencies only. As with conductive deafness, nerve deafness can be due to a variety of factors,
including hereditary factors, diseases, or even exposure to extremely loud noises. Many people with inner ear deafness experience an additional symptom known as tinnitus, which is characterized by a continuous constant ringing in the ears. Although tinnitus often accompanies nerve deafness, it may also occur without hearing loss.
In some cases, tinnitus may be caused by a phenomenon similar to the phantom limb syndrome because damage to part of the cochlea is sort of like an amputation. If the brain no longer gets its normal input, axons representing other parts of the body may invade part of the brain area that usually responds to sounds, therefore creating the illusion or the perception of sounds when it's actually a different part of the body that is active. Many elderly people experience hearing problems despite the help of hearing aids.
Even when hearing aids amplify sounds, elderly people often still have difficulty following a conversation, especially in a noisy room or when listening to a person who is speaking rapidly. This phenomenon occurs because the area of the brain responsible for language comprehension becomes less active in old age, which can be due to natural deterioration or to prolonged degradation of auditory input. Consequently, if a person delays getting a hearing aid, the language cortex no longer gets its normal input and becomes less and less responsive to language sounds. Another explanation has to do with attention.
To focus on specific auditory information, we need to filter out all other irrelevant sounds. Older people, because they lose inhibitory neurotransmitters in the auditory areas of the brain, find it difficult to suppress irrelevant sounds. In addition, due to the decrease in inhibitory neurotransmitters, the auditory cortex responds to sound in a more gradual and spread-out manner, instead of rapidly and precisely. Therefore, the response to one sound may overlap with the responses to other sounds. Interestingly, attention to auditory information can be enhanced if the listener pays attention to the speaker's face.
It turns out that we all do a bit more lip-reading than we realize, and focusing our visual attention on the speaker helps lock in our attention to the appropriate voice. Equally interesting is the finding that many elderly people attend to their partner's voice more effectively than other voices.