Hearing

Learning Objectives

By the end of this section, you will be able to:

  • Describe the basic anatomy and function of the auditory system
  • Explain how we encode and perceive pitch
  • Discuss how we localize sound

Our auditory system converts pressure waves into meaningful sounds. This translates into our ability to hear the sounds of nature, to appreciate the beauty of music, and to communicate with one another through spoken language. This section will provide an overview of the basic anatomy and function of the auditory system. It will include a discussion of how the sensory stimulus is translated into neural impulses, where in the brain that information is processed, how we perceive pitch, and how we know where sound is coming from.

Anatomy of the Auditory System

The ear can be separated into multiple sections. The outer ear includes the pinna, which is the visible part of the ear that protrudes from our heads, the auditory canal, and the tympanic membrane, or eardrum. The middle ear contains three tiny bones known as the ossicles, which are named the malleus (or hammer), incus (or anvil), and stapes (or stirrup). The inner ear contains the semicircular canals, which are involved in balance and movement (the vestibular sense), and the cochlea. The cochlea is a fluid-filled, snail-shaped structure that contains the sensory receptor cells (hair cells) of the auditory system (Figure 5.19).

An illustration shows sound waves entering the “auditory canal” and traveling to the inner ear. The locations of the “pinna” and “tympanic membrane (eardrum)” are labeled, as well as the “ossicles” and their subparts, the “malleus,” “incus,” and “stapes.” A close-up illustration of the inner ear shows the locations of the “semicircular canals,” “oval window,” “saccule,” and “cochlea.” A callout window shows the “basilar membrane” and “hair cells.”
Figure 5.19 The ear is divided into outer (pinna and tympanic membrane), middle (the three ossicles: malleus, incus, and stapes), and inner (cochlea and basilar membrane) divisions. (credit: Molly Wells Art)

Sound waves travel along the auditory canal and strike the tympanic membrane, causing it to vibrate. This vibration results in movement of the three ossicles (malleus, incus, stapes). As the ossicles move, the stapes presses into a thin membrane of the cochlea known as the oval window. As the stapes presses into the oval window, the fluid inside the cochlea begins to move, which in turn stimulates the hair cells, the auditory receptor cells of the inner ear. Hair cells are embedded in the basilar membrane, a thin strip of tissue within the cochlea.

The activation of hair cells is a mechanical process: the stimulation of the hair cell ultimately leads to activation of the cell. As hair cells become activated, they generate neural impulses that travel along the auditory nerve to the brain. Auditory information is shuttled to the inferior colliculus, the medial geniculate nucleus of the thalamus, and finally to the auditory cortex in the temporal lobe of the brain for processing. As in the visual system, there is evidence suggesting that information about auditory recognition and localization is processed in parallel streams (Rauschecker & Tian, 2000; Renier et al., 2009).


TRICKY TOPIC: AUDITORY TRANSDUCTION

Video: https://youtu.be/KcWmt-5BuTs

Pitch Perception

Different frequencies of sound waves are associated with differences in our perception of the pitch of those sounds. Low-frequency sounds are lower pitched, and high-frequency sounds are higher pitched. How does the auditory system differentiate among various pitches?

Several theories have been proposed to account for pitch perception. We’ll discuss two of them here: temporal theory and place theory. The temporal theory of pitch perception asserts that frequency is coded by the activity level of a sensory neuron: a given hair cell would fire action potentials at a rate related to the frequency of the sound wave. We sometimes refer to this as phase-locking. While this is a very intuitive explanation, we detect such a broad range of frequencies (20–20,000 Hz) that the frequency of action potentials fired by hair cells cannot account for the entire range. Because of the properties of the sodium channels in the neuronal membrane that generate action potentials, there is an upper limit on how fast a cell can fire (Shamma, 2001).
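
To see this limitation concretely, here is a minimal back-of-the-envelope sketch in Python (an illustration added here, not part of the original text). It assumes an absolute refractory period of roughly 1 ms, which caps a single cell at about 1000 action potentials per second, far short of tracking a 20,000 Hz wave cycle-for-cycle.

# A minimal sketch of why temporal coding alone cannot span the audible
# range. The ~1 ms refractory period is an assumed round figure.

REFRACTORY_PERIOD_S = 0.001                    # assumed: ~1 ms between spikes
MAX_FIRING_RATE_HZ = 1 / REFRACTORY_PERIOD_S   # ~1000 action potentials/s

for frequency_hz in (100, 500, 1000, 4000, 20000):
    trackable = frequency_hz <= MAX_FIRING_RATE_HZ
    print(f"{frequency_hz:>6} Hz tone: one spike per cycle is "
          f"{'possible' if trackable else 'impossible'}")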

The place theory of pitch perception suggests that different portions of the basilar membrane are sensitive to sounds of different frequencies. More specifically, the base of the basilar membrane responds best to high frequencies and the tip of the basilar membrane responds best to low frequencies. Therefore, hair cells that are in the base portion would be labeled as high-pitch receptors, while those in the tip of the basilar membrane would be labeled as low-pitch receptors (Shamma, 2001). We call this place coding.
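
To make place coding concrete, the sketch below uses the Greenwood function, a widely used frequency-to-position map for the human cochlea. The parameter values are the commonly cited human constants; this is an added illustration, not part of the original text.

# Place coding illustrated with the Greenwood frequency-position map.
# Constants below are the commonly cited values for the human cochlea.

A, a, k = 165.4, 2.1, 0.88

def best_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex (tip) and x = 1 is the base."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} (apex to base): ~{best_frequency(x):8.0f} Hz")

# Prints roughly 20 Hz at the apex up to ~20,000 Hz at the base, matching
# the text: the base handles high frequencies, the tip low frequencies.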

In reality, both theories explain different aspects of pitch perception. At frequencies below about 3000 Hz, it is clear that both the rate of action potentials (phase-locking) and the place of maximal stimulation along the basilar membrane contribute to our perception of pitch. Much higher-frequency sounds (above 3000 Hz), however, can be encoded only by place cues (Shamma, 2001).


TRICKY TOPIC: AUDITORY DISCRIMINATION

Video: https://youtu.be/IH_etXAVz1k

Sound Localization

The ability to locate sound in our environments is an important part of hearing. Localizing sound could be considered similar to the way that we perceive depth in our visual fields. Like the monocular and binocular cues that provide information about depth, the auditory system uses both monaural (one-eared) and binaural (two-eared) cues to localize sound.

Each pinna interacts with incoming sound waves differently, depending on the sound’s source relative to our bodies. This interaction provides a monaural cue that is helpful in locating sounds that occur above or below and in front or behind us. The sound waves received by your two ears from sounds that come from directly above, below, in front, or behind you would be identical; because binaural comparisons carry no information in these cases, monaural cues are essential (Grothe et al., 2010).

Binaural cues, on the other hand, provide information on the location of a sound along a horizontal axis by relying on differences in patterns of vibration of the eardrum between our two ears. If a sound comes from an off-center location, it creates two types of binaural cues: interaural level differences and interaural timing differences. Interaural level difference refers to the fact that a sound coming from the right side of your body is more intense at your right ear than at your left ear because of the attenuation of the sound wave as it passes through your head. Interaural timing difference refers to the small difference in the time at which a given sound wave arrives at each ear (Figure 5.20). Certain brain areas monitor these differences to determine where along the horizontal axis a sound originates (Grothe et al., 2010).
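
The geometry behind interaural timing differences can be made concrete with the classic spherical-head (Woodworth) approximation, sketched below in Python. The head radius and speed of sound are assumed typical values, not figures from this text.

import math

HEAD_RADIUS_M = 0.0875      # assumed average head radius (~8.75 cm)
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural timing difference for a distant source at the given
    azimuth: 0 degrees is straight ahead, 90 degrees is directly to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:>2} degrees: ITD ~ {itd_seconds(az) * 1e6:4.0f} microseconds")

# A source straight ahead yields no timing difference; a source at 90
# degrees yields roughly 650 microseconds, the tiny difference the brain
# monitors to place sounds along the horizontal axis.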

A photograph of jets has an illustration of arced waves labeled “sound” coming from the jets. These extend to an outline of a human head, with arrows from the jets identifying the location of each ear.
Figure 5.20 Localizing sound involves the use of both monaural and binaural cues. (credit “plane”: modification of work by Max Pfandl)

Hearing Loss

Deafness is the partial or complete inability to hear. Some people are born without hearing, which is known as congenital deafness. Other people suffer from conductive hearing loss, which is due to a problem delivering sound energy to the cochlea. Causes of conductive hearing loss include blockage of the ear canal, a hole in the tympanic membrane, problems with the ossicles, or fluid in the space between the eardrum and cochlea. Another group of people suffer from sensorineural hearing loss, which is the most common form of hearing loss. Sensorineural hearing loss can be caused by many factors, such as aging, head or acoustic trauma, infections and diseases (such as measles or mumps), medications, environmental effects such as noise exposure (noise-induced hearing loss, as shown in Figure 5.21), tumours, and toxins (such as those found in certain solvents and metals).

Photograph A shows Beyoncé performing at a concert. Photograph B shows a construction worker operating a jackhammer.
Figure 5.21 Environmental factors that can lead to sensorineural hearing loss include regular exposure to loud music or construction equipment. (a) Musical performers and (b) construction workers are at risk for this type of hearing loss. (credit a: modification of work by “GillyBerlin_Flickr”/Flickr; credit b: modification of work by Nick Allen)

Because the sound wave stimulus is transmitted mechanically from the eardrum through the ossicles to the oval window of the cochlea, any disruption along this chain produces some degree of hearing loss. With conductive hearing loss, hearing problems are associated with a failure in the vibration of the eardrum and/or the movement of the ossicles. These problems are often dealt with through devices like hearing aids that amplify incoming sound waves to make vibration of the eardrum and movement of the ossicles more likely to occur.

When the hearing problem is associated with a failure to transmit neural signals from the cochlea to the brain, it is called sensorineural hearing loss. One disease that results in sensorineural hearing loss is Ménière’s disease. Although not well understood, Ménière’s disease results in a degeneration of inner ear structures that can lead to hearing loss, tinnitus (constant ringing or buzzing), vertigo (a sense of spinning), and an increase in pressure within the inner ear (Semaan & Megerian, 2011). This kind of loss cannot be treated with hearing aids, but some individuals might be candidates for a cochlear implant as a treatment option. Cochlear implants are electronic devices that consist of a microphone, a speech processor, and an electrode array. The device receives incoming sound information and directly stimulates the auditory nerve to transmit information to the brain.
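
As a rough illustration of what the speech processor does, the toy sketch below (an assumption for teaching purposes, not any manufacturer’s actual algorithm) splits the microphone signal into frequency bands and lets each band’s energy drive one electrode, exploiting the cochlea’s place coding described earlier.

import numpy as np

SAMPLE_RATE = 16_000    # assumed microphone sample rate (Hz)
N_ELECTRODES = 8        # real implants typically use about 12 to 22

def band_energies(signal: np.ndarray) -> np.ndarray:
    """Return one energy value per electrode channel via an FFT filter bank."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)
    # Log-spaced band edges from 200 Hz to 8 kHz; low bands map to apical
    # (low-frequency) electrodes, high bands to basal (high-frequency) ones.
    edges = np.geomspace(200, 8000, N_ELECTRODES + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

# Example: a 1 kHz tone should drive mainly one mid-frequency channel.
t = np.arange(0, 0.05, 1 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 1000 * t)
print(np.round(band_energies(tone), 1))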

WHAT DO YOU THINK?

Deaf Culture

In Canada, the United States, and other places around the world, Deaf people have their own language, schools, and customs. This is called Deaf Culture. In Canada, there are two official sign languages: American Sign Language (ASL) and Quebec Sign Language, la langue des signes québécoise (LSQ). ASL has no verbal component; it is based entirely on visual signs and gestures, and its primary mode of communication is signing. One of the values of Deaf Culture is to continue traditions like using sign language rather than teaching Deaf children to try to speak, read lips, or have cochlear implant surgery.

When a child is diagnosed as deaf, parents have difficult decisions to make. Should the child be enrolled in mainstream schools and taught to verbalize and read lips? Or should the child be sent to a school for Deaf children to learn ASL and have significant exposure to Deaf Culture? Do you think there might be differences in the way that parents approach these decisions depending on whether or not they are also Deaf?

License


Introduction to Psychology & Neuroscience - MUN Edition Copyright © 2020 by Cheryll Fitzpatrick and Christina Thorpe is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
