I can remember a time, a generation ago in the 1970s, when these were still secrets. There was essentially no literature on the subject of editing and processing sound effects for movies. It had been done well and done poorly for fifty years by a small group of people who only taught what they knew to a few lucky apprentices.
Dry ice (solid carbon dioxide) rings, screams, and chatters when a piece of metal is pressed against it. The result can be used to make the sound of an alien monster voice. What makes the sound useful is not the fact that it comes from an unexpected place (beware of falling in love with sounds simply because of their origin) but because in fortunate moments it is familiar and alien at the same time. That's a sound design secret worth remembering.
George Lucas' idea that the look
of "Star Wars" should embody what he called the "used future" was an
amazing flash of insight that Ben Burtt carried into the sound of the
films. Instead of using electronically synthesized sounds, like the
sci-fi films before "Star Wars," Ben recorded the ordinary objects around
him (and a few not so ordinary), then processed and manipulated those
sounds to make them believably foreign. The familiar aspect of the sound
convinces us that what we hear is real in a way the sine waves in early
sci-fi films never could. The exotic face of the same sound suggests
a dimension of reality we hadn't quite imagined before.
Amplified reality is the basic goal in action-adventure and sci-fi sequences. How do you produce sounds that have "amplified reality"? You begin by trying to forget for a while what the Nazi tank in an Indiana Jones film would "really" sound like, and start thinking about what it would FEEL LIKE in a nightmare. The treads would be like spinning samurai blades. The engine would be like the growl of an angry beast. You then go out and find sounds that have those qualities, or alter sounds to make them have those qualities. It makes no difference whether the sounds you collect actually have anything to do with tanks, samurai blades, or growling animals. The essential emotional quality of the sounds is virtually ALL that matters. When you find the sounds that make you believe a screaming mechanical beast is about to rip you to shreds with enormous spinning blades, then, and only then, do you bring in actual recordings of a tank and blend them with the nightmare elements. If the "volume envelope" of the real and non-real sounds is similar, if they are in sync, if their pitch varies roughly the same amount from moment to moment, and their reverberation characteristics are similar, then the nightmare sounds will tend to hide themselves from the conscious awareness of the audience. But their presence will nevertheless be felt.
Using the nightmare approach, an apple being bitten is a useful element in the sound of a spear piercing flesh, chalk squeaking across a blackboard becomes part of a rocket engine screaming through the air, and the muzzle blast of a howitzer is slightly modified to become the muzzle blast of a 9mm pistol.
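The envelope-matching trick described above can be sketched numerically. This is a minimal illustration, not how any particular workstation does it: it uses NumPy, synthetic signals standing in for real recordings, and a simple moving-RMS envelope (all invented here for demonstration).

```python
import numpy as np

def rms_envelope(x, win):
    """Moving RMS amplitude envelope of x over a window of `win` samples."""
    padded = np.pad(x.astype(float) ** 2, (win // 2, win // 2), mode="edge")
    kernel = np.ones(win) / win
    smoothed = np.convolve(padded, kernel, mode="same")
    return np.sqrt(smoothed[win // 2 : win // 2 + len(x)])

def match_envelope(layer, reference, win=1024, eps=1e-8):
    """Scale `layer` moment to moment so its volume envelope follows
    `reference` -- the condition that lets a nightmare layer hide
    inside a real recording."""
    n = min(len(layer), len(reference))
    layer, reference = layer[:n], reference[:n]
    gain = rms_envelope(reference, win) / (rms_envelope(layer, win) + eps)
    return layer * gain

# Synthetic stand-ins: a decaying low rumble as the "real tank,"
# and broadband noise as the "nightmare" layer.
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
tank = np.sin(2 * np.pi * 60 * t) * np.exp(-2 * t)
screech = np.random.default_rng(0).uniform(-1, 1, sr)

hidden = match_envelope(screech, tank)   # nightmare layer, now tracking the tank
mix = 0.7 * tank + 0.3 * hidden          # blend; the layer is felt, not noticed
```

Because the scaled layer rises and falls exactly when the real sound does, a listener tends to hear one event rather than two; sync and matched reverberation would do the rest.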
Most sound effects -- contrary
to popular belief, even for science fiction films -- are not "synthesized."
It's theoretically possible to synthesize them, starting from scratch
with nothing but oscillators, but it isn't an efficient way to work.
Real sounds are really complex. It takes so much time to synthesize
them that it's still a lot quicker to just go out and record real-world
sounds, then "sci-fi" or otherwise modify them using various kinds of processing.
At the moment, the most common tool of choice for "designing" sounds is Digidesign's ProTools workstation. It is by far the most versatile in terms of allowing you to alter the pitch, equalization, dynamics, and reverberation of any given sound, as well as more exotic processing like ring modulation. This is largely because of the proliferation of so-called "third party plug-ins" made by other companies to work in the ProTools environment. Its current 16-bit limitation has caused some golden-eared types to turn up their noses at ProTools, but in practical terms -- at least for sound effects and dialog, in my humble opinion -- 16 bits is just fine. When 20 or 24 bits come, they'll be noticeably, but not enormously, better . . . and that'll be great.
Some sound designers prefer to work mostly with samplers, which operate in RAM rather than from a hard drive, the sounds usually triggered to start and stop with a piano-style keyboard. In some ways, the old Synclavier (a combination sampler and sequencer, the sequencer allowing the keystrokes to be remembered) has never been surpassed in its ability to quickly manipulate lots of sounds.
Others prefer hard disk based workstations.
ProTools, which was designed around hard disk storage, can be integrated
with a sampler as well. The
kinds of processing available have increased because of digital technology,
though the vast majority of processing still involves altering the pitch,
eq, introducing phasing or flanging, adding reverb, and playing things
backwards -- all techniques which have been used for many decades.
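Two of those decades-old manipulations are simple enough to express in a few lines. The sketch below is a deliberately crude illustration in NumPy (the function names and the tape-style varispeed approach are my own, for demonstration), not how a commercial workstation implements them:

```python
import numpy as np

def pitch_shift_by_resampling(x, semitones):
    """Crude varispeed pitch shift: resampling changes pitch and duration
    together, like slowing down or speeding up a tape machine."""
    ratio = 2 ** (semitones / 12)              # frequency ratio for the shift
    old_idx = np.arange(len(x))
    new_idx = np.arange(0, len(x) - 1, ratio)  # read the signal faster/slower
    return np.interp(new_idx, old_idx, x)

def reverse(x):
    """Play the sound backwards -- an old trick for making it alien."""
    return x[::-1]

sr = 44100
t = np.linspace(0, 0.5, sr // 2, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)             # a plain 440 Hz test tone

down_an_octave = pitch_shift_by_resampling(tone, -12)  # half pitch, twice as long
backwards = reverse(tone)
```

Dropping a sound an octave or two this way also stretches it out, which is often exactly the point: a small, quick sound becomes huge and slow.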
Though it's true that in order to produce the sound effects for a typical action scene you usually have to begin with hundreds of tracks, you almost never want to actually hear all of those sounds at the same time in the finished mix. Whenever you hear more than three or four sounds at once they become noise, which, by definition, is meaningless. The goal in mixing is not to attempt to "mix" the largest number of sounds possible together. In fact the goal probably should be to eliminate as many of them as you can. As you mix you are assigning priorities from moment to moment to the sounds you have available; constantly making decisions about which sounds to feature, which to play in the background, and which to eliminate.
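That mix-by-elimination rule can be caricatured in a few lines of code. The track names, priority numbers, and thresholds below are invented purely for illustration; a real mix assigns these priorities continuously, by ear:

```python
def mix_moment(tracks, feature_count=3, background_count=2, background_gain=0.2):
    """Toy model of one moment in a mix: tracks is a list of
    (name, priority); returns a name -> gain mapping where only the
    top few sounds are featured and most are eliminated outright."""
    ranked = sorted(tracks, key=lambda t: t[1], reverse=True)
    gains = {}
    for i, (name, _priority) in enumerate(ranked):
        if i < feature_count:
            gains[name] = 1.0              # featured
        elif i < feature_count + background_count:
            gains[name] = background_gain  # audible, tucked behind
        else:
            gains[name] = 0.0              # eliminated -- saved for the next film
    return gains

moment = [("tank tread", 9), ("dialog", 10), ("music", 7), ("birds", 2),
          ("wind", 3), ("debris", 5), ("distant gunfire", 4)]
print(mix_moment(moment))
```

Seven source tracks, but only three carry the moment and only five make any sound at all; the rest become next picture's material.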
The music department typically has no idea what the sound effects department is doing, and vice versa. Producers and Directors should see to it that there is some coordination of the two, but they virtually never do. Coordination of this kind is rare during production. But most Producers and Directors are so befuddled about sound that the best they can usually manage is to tell both the music and sound effects departments to throw as much crap as possible at the canvas, trusting that some sense can be made of it in the final mix.
It ends up being about choices, of course. Invariably, the last pass in a final mix before print mastering consists of lowering sound effects here and there, raising dialog, and mostly raising music. The sound designer is secretly happy that some of his material was lowered to inaudibility. He'll feel less guilty using that stuff on the next project!
On a not completely unrelated subject,
I want to congratulate the Sound Branch of the Academy Of Motion Picture
Arts and Sciences for nominating "The English Patient" this year for
an Oscar. Before "Star Wars," most of the films nominated for Best Sound
were nominated because of their music. Since "Star Wars," most have
been nominated because of their sound effects. The mixes in the vast
majority of these films could never be accused of subtlety. "The English
Patient" is one of those rare sound nominees whose music and sound effects
were extremely well-crafted without being overstated; powerful but not
obvious; complementary, not antagonistic. "The English Patient" may
or may not have been the "Best" film sound job this year, but the Sound
Branch deserves some credit for recognizing that great sound and ear
damage aren't always mutually dependent.