
What Is Music?

Do we really know what music is? When we speak, we transmit concrete messages. The thought conveyed may be abstract, or even nonsensical, but the sound content offers information. When we listen to environmental sounds--animal cries, thunder, rain--we do so because of the ancient drive to become aware of our surroundings by using all our senses. Listening confers a survival advantage; interpreting acoustical information offered by speech and by the environment is of fundamental biological importance. But what information does music transmit?

Music in nearly all cultures consists of organized, structured, rhythmic successions and superpositions of tones, selected from a very limited set of discrete pitches (the scales). Environmental sounds provide no direct equivalent, and imitating environmental sounds has never been the prime force driving the development of a musical culture. (Bird song is music to us, but to birds it's pure information--"This territory is claimed" or "This male is looking for a mate.")

Yet if music does not convey biologically relevant information to human brains, why does it affect us? Beautiful passages can give us goose pimples; awful ones can move us to rage. Crying babies calm down at the simple musical tones of their mother's song. Why do these things happen?

Before science can deal with these ultimate questions in music perception, it must first consider many more basic problems. For example, how do we perceive the pitch of the tone of an instrument? How does our brain distinguish between the tone of a clarinet and a violin? How do we distinguish between two tones of the same instrument sounded simultaneously? Why is the octave such a special musical interval, so special that we give notes differing by one or more octaves the same name? Answers are beginning to come from psychoacoustics, the science dealing with sound perception, and neuropsychology, the science dealing with the brain's cognitive and affective functions (intelligence and emotion) and related neural mechanisms.
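The octave's special status has a simple physical correlate: a 2:1 frequency ratio. A minimal numerical sketch (the 440 Hz concert pitch is the only assumed constant; the variable names are my own):

```python
A4 = 440.0  # concert A, in hertz

# Notes one or more octaves apart differ by a factor of 2 in frequency,
# which is why they share the same letter name.
a_series = [A4 * 2 ** n for n in range(-2, 3)]
print(a_series)  # [110.0, 220.0, 440.0, 880.0, 1760.0]
```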

Great physicists like Georg Ohm and Hermann von Helmholtz began studying these basic questions over 150 years ago. Today, the research is conducted by an interdisciplinary mix of physicists, neuropsychologists, physiologists, engineers, and psychologists in many research laboratories all over the world--the University of Alaska included.

The results are fascinating but hard to describe. In a nutshell, it is increasingly clear that musical tone perception is a pattern recognition process, even for the simplest sound attributes like pitch and timbre.

Vision provides an analogy. Our visual sense detects specific geometrical features in images optically projected on the eye's retina; the brain's cognitive network then tries to recognize the image. For instance, a vertex with a cross-line is recognized as the letter A regardless of its size and orientation; a single closed contour is recognized as an O.

In the auditory system, incoming sound waves are mechanically converted into resonant oscillations of the basilar membrane in the inner ear. The brain recognizes a particular vibration pattern as pertaining to a given musical tone, much as it recognizes a particular visually perceived geometrical pattern as a given letter. The spatial arrangement of the vibration pattern along the basilar membrane leads to the sensation of subjective pitch; the relative intensities of the resonance regions caused by the overtones lead to the sensation of timbre. And--as one might guess--the total number of stimulated acoustical nerve fibers leads to the sensation of loudness.
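The overtone-pattern idea can be sketched numerically. The toy model below is my own illustration, not the article's method: it builds two tones sharing a fundamental (hence the same pitch) but with different overtone intensities (hence different timbres), then recovers each harmonic's strength with a simple Fourier projection--a crude stand-in for what the basilar membrane does mechanically. All names and amplitude values are illustrative choices.

```python
import math

def tone(rate, seconds, fundamental, overtone_amps):
    """Sample a tone built from a fundamental plus overtones:
    amplitude overtone_amps[k] at harmonic (k + 1) * fundamental Hz."""
    n = int(rate * seconds)
    return [sum(a * math.sin(2 * math.pi * (k + 1) * fundamental * i / rate)
                for k, a in enumerate(overtone_amps))
            for i in range(n)]

def harmonic_strength(samples, rate, freq):
    """Estimate the amplitude at freq by projecting the waveform onto a
    sine/cosine pair -- effectively one bin of a discrete Fourier transform."""
    n = len(samples)
    s = sum(x * math.sin(2 * math.pi * freq * i / rate) for i, x in enumerate(samples))
    c = sum(x * math.cos(2 * math.pi * freq * i / rate) for i, x in enumerate(samples))
    return 2 * math.hypot(s, c) / n

RATE, F0 = 8000, 220  # sample rate (Hz) and shared fundamental (Hz)

# Same fundamental -> same perceived pitch; different overtone
# intensity patterns -> different timbre ("hollow" vs "bright").
hollow = tone(RATE, 1.0, F0, [1.0, 0.05, 0.6, 0.04, 0.4])  # odd harmonics dominate
bright = tone(RATE, 1.0, F0, [1.0, 0.7, 0.5, 0.35, 0.25])  # even harmonics strong too

for name, wave in [("hollow", hollow), ("bright", bright)]:
    spectrum = [round(harmonic_strength(wave, RATE, (k + 1) * F0), 2) for k in range(5)]
    print(name, spectrum)
```

The projection recovers each overtone's amplitude because sinusoids at distinct harmonic frequencies are orthogonal over a whole number of cycles; the recovered intensity pattern is the tone's "fingerprint" for timbre.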

Recent advances in research on the perception of musical tones have led to new theories in musicology to explain tonal consonance and dissonance, harmony, tonal dominance, the sense of tonal return, and many practical aspects ranging from instrument performance techniques to music hall acoustics.

Even these advances leave us far from answering the ultimate questions about the effects music has on people. These pertain to the more complex higher functions of the brain. Yet, obviously, music must have a survival value.

The clue may lie in the perception of speech. Since the acoustical patterns of speech are extremely complex, it is conceivable that as human language evolved, a drive emerged to train the acoustic sense in sophisticated sound pattern recognition. This may have developed as part of an inborn human instinct to acquire language from the moment of birth. Infants without this drive, or infants born to mothers without the drive to vocalize simple musical sounds, would have a decisive disadvantage for survival in their human environment.

What came along with the drive and the ability--fortunately!--was the motivation to listen to, analyze, store, and generate musical sounds--and the emotional reward when this is done.