4.1.2 Psychoacoustics

Human hearing is a wondrous creation that in some ways we understand very well and in other ways we don't understand at all. We can look at the anatomy of the human ear and analyze – down to the level of the tiny hair cells along the basilar membrane – how vibrations are received and transmitted through the nervous system. But how this communication is translated by the brain into the subjective experience of sound and music remains a mystery. (See Levitin 2007.)

We'll probably never know how vibrations of air pressure are transformed into our marvelous experience of music and speech. Still, a great deal has been learned from an analysis of the interplay among physics, human anatomy, and perception. This interplay is the realm of psychoacoustics, the scientific study of sound perception. Any number of sources can give you the details of the anatomy of the human ear and how it receives and processes sound waves; (Pohlmann 2005), (Rossing, Moore, and Wheeler 2002), and (Everest and Pohlmann) are good sources, for example. In this chapter, we want to focus on the elements that shed light on best practices in recording, encoding, processing, compressing, and playing digital sound. Most important for our purposes is an examination of how humans subjectively perceive the frequency, amplitude, and direction of sound. A concept that appears repeatedly in this context is the non-linear nature of human sound perception. Understanding this concept leads to a mathematical representation of sound that is modeled after the way we experience it – a representation well suited to the digital analysis and processing of sound, as we'll see in what follows. First, we need to be clear about the language we use in describing sound.
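As a small preview of what "non-linear" means in practice, here is a minimal sketch (in Python, with illustrative helper names of our own choosing, not taken from the sources above) of two logarithmic conversions of the kind discussed later: expressing an amplitude ratio in decibels and a pitch interval in semitones. Doubling an amplitude always adds about 6 dB, and doubling a frequency is always heard as the same musical interval, an octave, regardless of the starting level or pitch.

```python
import math

def amplitude_to_decibels(amplitude, reference=1.0):
    """Convert a linear amplitude ratio to decibels (20 * log10),
    a scale modeled on the roughly logarithmic perception of loudness."""
    return 20.0 * math.log10(amplitude / reference)

def semitones_between(f1, f2):
    """Number of equal-tempered semitones between two frequencies,
    reflecting the logarithmic perception of pitch (an octave = a doubling)."""
    return 12.0 * math.log2(f2 / f1)

# Doubling the amplitude adds about 6 dB, no matter the starting level.
print(amplitude_to_decibels(2.0))        # about 6.02 dB
# Doubling the frequency is always heard as one octave (12 semitones).
print(semitones_between(220.0, 440.0))   # 12.0
```

The point of the sketch is only that equal perceptual steps correspond to equal ratios, not equal differences, which is why logarithmic scales keep reappearing in the sections that follow.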