8.2.4.1 Designing a Sound Delivery System

Theatre and concert performances introduce unique pre-production challenges not present in sound for CD, DVD, film, or video because the sound is delivered live. One of the most important parts of the process in this context is the design of a sound delivery system. The purpose of the design is to ensure clarity of sound and a uniform experience among audience members.

In a live performance, it’s quite possible that when the performers on the stage create their sound, that sound does not arrive at the audience loudly or clearly enough to be intelligible. A sound designer or sound engineer is hired to design a sound reinforcement system to address this problem. The basic process is to use microphones near the performers to pick up whatever sound they’re making and then play that sound out of strategically-located loudspeakers.

There are several things to consider when designing and operating a sound reinforcement system:

  • The loudspeakers must faithfully generate a loud enough sound.
  • The microphones must pick up the source sound as faithfully as possible without getting in the way.
  • The loudspeakers must be positioned in a way that will direct the sound to the listeners without sending too much sound to the walls or back to the microphones. This is because reflections and reverberations affect intelligibility and gain.
  • Ideally, the sound system will deliver a similar listening experience to all the listeners regardless of where they sit.

Many of these considerations can be analyzed before you purchase the sound equipment so that you can spend your money wisely. Also, once the equipment is installed, the system can be tested and adjusted for better performance. These adjustments include repositioning microphones and loudspeakers to improve gain and frequency response, replacing equipment with something else that performs better, and adjusting the settings on equalizers, compressors, crossovers, and power amplifiers.

Most loudspeakers have a certain amount of directivity. Loudspeaker directivity is described in terms of the 6 dB down point – a horizontal and vertical angle off-axis corresponding to the location where the sound is reduced by 6 dB.  The 6 dB down point is significant because, as a rule of thumb, you want the loudness at any two points in the audience to differ by no more than 6 dB. In other words, the seat on the end of the aisle shouldn’t sound more than 6 dB quieter or louder than the seat in the middle of the row, or anywhere else in the audience.

The issue of loudspeaker directivity is complicated by the fact that loudspeakers naturally have a different directivity for each frequency. A single circular loudspeaker driver is more directional as the frequency increases because the loudspeaker diameter gets larger relative to the wavelength of the frequency. This high-frequency directivity effect is illustrated in Figure 8.21. Each of the six plots in the figure represents a different frequency produced by the same circular loudspeaker driver. In the figures, λ is the wavelength of the sound. (Recall that the higher the frequency, the smaller the wavelength. See Chapter 2 for the definition of wavelength, and see Chapter 1 for an explanation of how to read a polar plot.)

Going from top to bottom, left to right in Figure 8.21, the frequencies being depicted get higher. Notice that frequencies having a wavelength longer than the diameter of the loudspeaker are dispersed very widely, as shown in the first two polar plots. Once the frequency has a wavelength equal to the diameter of the loudspeaker, the loudspeaker begins to exercise some directional control over the sound. This directivity gets narrower as the frequency increases and the wavelength decreases.

Figure 8.21 Directivity of circular radiators. Diagrams created from actual measured sound

This varying directivity per frequency for a single loudspeaker driver partially explains why most full-range loudspeakers have multiple drivers. The problem is not that a single loudspeaker can’t produce the entire audible spectrum. Any set of headphones uses a single driver for the entire spectrum. The problem with using one loudspeaker driver for the entire spectrum is that you can’t distribute all the frequencies uniformly across the listening area. The listeners sitting right in front of the loudspeaker will hear everything fine, but for the listeners sitting to the side of the loudspeaker, the low frequencies will be much louder than the high ones.

To distribute frequencies more uniformly, a second loudspeaker driver can be added, considerably smaller than the first. Then an electronic unit called a crossover directs the high frequencies to the small driver and the low frequencies to the large driver. With two different-size drivers, you can achieve a much more uniform directional dispersion, as shown in Figure 8.22. In this case, the larger driver is 5″ in diameter and the smaller one is 1″ in diameter. Frequencies of 500 Hz and 1000 Hz have wavelengths longer than 5″, so they are fairly omnidirectional. The reason that frequencies of 2000 Hz and above have consistent directivity is that the frequencies are distributed to the two loudspeaker drivers in a way that keeps the relationship consistent between the wavelength and the diameter of the driver. The 2000 Hz and 4000 Hz frequencies would be directed through the 5″ diameter driver because their wavelengths are between 6″ and 3″. The 8000 Hz and 16,000 Hz frequencies would be distributed to the 1″ diameter driver because their wavelengths are between 2″ and 1″. This way the two different-size drivers are able to exercise directional control over the frequencies they radiate.
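
If you want to check this wavelength arithmetic yourself, the short Python sketch below computes the wavelength of each frequency plotted in Figure 8.22 and its ratio to the diameter of the driver that would reproduce it. The 6 kHz crossover point and the 1130 ft/s speed of sound are assumptions chosen to match the description above, not specifications of any real loudspeaker.

```python
# Sketch: wavelength vs. driver diameter for a hypothetical 2-way
# loudspeaker with a 5" woofer and a 1" tweeter. Assumes a speed of
# sound of about 1130 ft/s (13,560 inches/s); the 6 kHz crossover
# frequency is an assumed value matching the assignments above.
SPEED_OF_SOUND = 13560.0  # inches per second, approximate
CROSSOVER_HZ = 6000.0     # hypothetical crossover point

def wavelength_inches(freq_hz: float) -> float:
    """Wavelength in inches for a frequency in Hz."""
    return SPEED_OF_SOUND / freq_hz

for f in (500, 1000, 2000, 4000, 8000, 16000):
    wl = wavelength_inches(f)
    diameter = 5.0 if f < CROSSOVER_HZ else 1.0
    print(f'{f:>6} Hz: wavelength {wl:6.2f}", '
          f'{diameter:.0f}" driver, wavelength/diameter = {wl / diameter:.2f}')
```

From 2 kHz upward, the wavelength-to-diameter ratio stays within a similar range across the two drivers, which is exactly the consistency described above.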

Figure 8.22 Directivity of 2-way loudspeaker system with 5″ and 1″ diameter drivers

There are many other strategies used by loudspeaker designers to get consistent pattern control, but all must take into account the size of the loudspeaker drivers and the way in which they affect frequencies. Simply by looking at the size of a loudspeaker’s drivers, you can estimate the lowest frequency over which it can exercise directional control.

Understanding how a loudspeaker exercises directional control over the sound it radiates can also help you decide where to install and aim a loudspeaker to provide consistent sound levels across the area of your audience. Using the inverse square law in conjunction with the loudspeaker directivity information, you can find a solution that provides even sound coverage over a large audience area using a single loudspeaker. (The inverse square law is introduced in Chapter 4.)

Consider the example 1000 Hz vertical polar plot for a loudspeaker shown in Figure 8.23. If you’re going to use that loudspeaker in the theatre shown in Figure 8.24, where do you aim the loudspeaker?

Figure 8.23 Vertical 1000 Hz polar plot for a loudspeaker
Figure 8.24 Section view of audience area with distances and angles for a loudspeaker

Most beginning sound system designers will choose to aim the loudspeaker at seat B thinking that it will keep the entire audience as close as possible to the on-axis point of the loudspeaker. To test the idea, we can calculate the dB loss over distance using the inverse square law for each seat and then subtract any additional dB loss incurred by going off-axis from the loudspeaker. Seat B is directly on axis with the loudspeaker, and according to the polar plot there is a loss of approximately 2 dB at 0 degrees. Seat A is 33 degrees down from the on-axis point of the loudspeaker, corresponding to 327 degrees on the polar plot, which shows an approximate loss of 3 dB. Seat C is 14 degrees off axis from the loudspeaker, resulting in a loss of 6 dB according to the polar plot. Assuming that the loudspeaker is outputting 100 dBSPL at 1 meter (3.28 feet), we can calculate the dBSPL level for each seat as shown in Table 8.1.

[listtable width=50% caption=””]

  • A
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{33.17′} \right )-3 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\log_{10}0.1 \right )-3 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\ast -1 \right )-3 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( -20\right )-3 dB$$
    • $$Seat\: A\: dBSPL = 77\, dBSPL$$
  • B
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{50.53′} \right )-2 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\log_{10}0.06 \right )-2 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\ast -1.19 \right )-2 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( -23.75\right )-2 dB$$
    • $$Seat\: B\: dBSPL = 74.25\, dBSPL$$
  • C
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{77.31′} \right )-6 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\log_{10}0.04 \right )-6 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\ast -1.37 \right )-6 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( -27.45\right )-6 dB$$
    • $$Seat\: C\: dBSPL = 66.55\, dBSPL$$

[/listtable]

Table 8.1 Calculating dBSPL of a given loudspeaker aimed on-axis with seat B

In this case the loudest seat is seat A at 77 dBSPL, and seat C is the quietest at 66.55 dBSPL, with a 10.45 dB difference. As discussed, we want all the audience locations to be within a 6 dB range. But before we throw this loudspeaker away and try to find one that works better, let’s take a moment to examine the reasons why we have such a poor result. The reason seat C is so much quieter than the other seats is that it is the farthest away from the loudspeaker and is receiving the largest reduction due to directivity. By comparison, A is the closest to the loudspeaker, resulting in the lowest loss over distance and only a 3 dB reduction due to directivity. To even this out, let’s try making the farthest seat the one with the least directivity loss, and the closest seat the one with the most.

The angle with the least directivity loss is around 350 degrees, so if we aim the loudspeaker so that seat C lines up with that 350 degree point, that seat will have no directivity loss. With that aim point, seat B will then have a directivity loss of 3 dB, and seat A will have a directivity loss of 10 dB. Now we can recalculate the dBSPL for each seat as shown in Table 8.2.

[listtable width=50% caption=””]

  • A
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{33.17′} \right )-10 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\log_{10}0.1 \right )-10 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( 20\ast -1 \right )-10 dB$$
    • $$Seat\: A\: dBSPL = 100 dB + \left ( -20\right )-10 dB$$
    • $$Seat\: A\: dBSPL = 70\, dBSPL$$
  • B
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{50.53′} \right )-3 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\log_{10}0.06 \right )-3 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( 20\ast -1.19 \right )-3 dB$$
    • $$Seat\: B\: dBSPL = 100 dB + \left ( -23.75\right )-3 dB$$
    • $$Seat\: B\: dBSPL = 73.25\, dBSPL$$
  • C
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\log_{10}\frac{3.28′}{77.31′} \right )-0 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\log_{10}0.04 \right )-0 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( 20\ast -1.37 \right )-0 dB$$
    • $$Seat\: C\: dBSPL = 100 dB + \left ( -27.45\right )-0 dB$$
    • $$Seat\: C\: dBSPL = 72.55\, dBSPL$$

[/listtable]

Table 8.2 Calculating dBSPL of a given loudspeaker aimed on-axis with seat C

In this case our loudest seat is seat B at 73.25 dBSPL, and our quietest seat is seat A at 70 dBSPL, for a difference of 3.25 dB. Compared with the previous difference of 10.45 dB, we now have a much more even distribution of sound, to the point where most listeners will hardly notice the difference. Before we fully commit to this plan, we have to test these angles at several different frequencies, but this example serves to illustrate an important rule of thumb when aiming loudspeakers. In most cases, the best course of action is to aim the loudspeaker at the farthest seat, and have the closest seat be the farthest off-axis to the loudspeaker. This way, as you move from the closest seat to the farthest seat, while you’re losing dB over the extra distance you’re also gaining dB by moving more directly on-axis with the loudspeaker.
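
The arithmetic in Tables 8.1 and 8.2 is easy to automate if you want to try other aim points. Here is a minimal Python sketch of the same calculation; the seat distances and directivity losses are the ones read from Figures 8.23 and 8.24. Because the tables round intermediate values (e.g., treating 3.28/33.17 as exactly 0.1), the script’s output differs from the tables by a fraction of a dB.

```python
import math

def seat_level(ref_dbspl, ref_dist_ft, seat_dist_ft, off_axis_loss_db):
    """dBSPL at a seat: reference level, minus inverse square law loss
    over distance, minus the directivity loss read from the polar plot."""
    return (ref_dbspl
            + 20 * math.log10(ref_dist_ft / seat_dist_ft)
            - off_axis_loss_db)

# Loudspeaker outputs 100 dBSPL at 1 meter (3.28 feet).
distances = {"A": 33.17, "B": 50.53, "C": 77.31}   # feet, from Figure 8.24
aimings = {
    "B": {"A": 3, "B": 2, "C": 6},    # directivity losses from Table 8.1
    "C": {"A": 10, "B": 3, "C": 0},   # directivity losses from Table 8.2
}

for aim, losses in aimings.items():
    levels = {s: seat_level(100, 3.28, d, losses[s])
              for s, d in distances.items()}
    spread = max(levels.values()) - min(levels.values())
    report = ", ".join(f"{s}: {lvl:.2f} dBSPL" for s, lvl in levels.items())
    print(f"Aimed at seat {aim} -> {report} (spread {spread:.2f} dB)")
```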

[aside]EASE was developed by German engineers ADA (Acoustic Design Ahnert) in 1990 and introduced at the 88th AES Convention.  That’s also the same year that Microsoft announced Windows 3.0.[/aside]

Fortunately there are software tools that can help you determine the best loudspeakers to use and the best way to deploy them in your space. These tools range in price from free solutions such as MAPP Online Pro from Meyer Sound, shown in Figure 8.25, to relatively expensive commercial products like EASE from the Ahnert Feistel Media Group, shown in Figure 8.26. These programs allow you to create a 2D or 3D drawing of the room and place virtual loudspeakers in the drawing to see how they disperse the sound. The virtual loudspeaker files come in several formats. The most common is the EASE format. EASE is the most expensive and comprehensive solution out there, and fortunately most other programs have the ability to import EASE loudspeaker files. Another format is the Common Loudspeaker Format (CLF). CLF files use an open format, and many manufacturers are starting to publish their loudspeaker data in CLF. Information on loudspeaker modeling software that uses CLF can be found at the website for the Common Loudspeaker Format Group, http://www.clfgroup.org.

Figure 8.25 MAPP Online Pro software from Meyer Sound
Figure 8.26 EASE software

8.2.4.2 System Documentation

Once you’ve decided on a loudspeaker system that distributes the sound the way you want, you need to begin the process of designing the systems that capture the sound of the performance and feed it into the loudspeaker system. Typically this involves creating a set of drawings that give you the opportunity to think through the entire sound system and explain to others – installers, contractors, or operators, for example – how the system will function.

[aside]You can read the entire USITT document on System Diagram guidelines by visiting the USITT website.[/aside]

The first diagram to create is the System Diagram. This is similar in function to an electrical circuit diagram, showing you which parts are used and how they’re wired up.  The sound system diagram shows how all the components of a sound system connect together in the audio signal chain, starting from the microphones and other input devices all the way through to the loudspeakers that reproduce that sound. These diagrams can be created digitally with vector drawing programs such as AutoCAD and VectorWorks or diagramming programs such as Visio and OmniGraffle.

The United States Institute for Theatre Technology has published some guidelines for creating system diagrams. The most common symbol or block used in system diagrams is the generic device block shown in Figure 8.27. The EQUIPMENT TYPE label should be replaced with a descriptive term such as CD PLAYER or MIXING CONSOLE. You can also specify the exact make and model of the equipment in the label above the block.

Figure 8.27 A generic device block for system diagrams

There are also symbols to represent microphones, power amplifiers, and loudspeakers. You can connect all the various symbols to represent an entire sound system. Figure 8.28 shows a very small sound system, and Figure 8.29 shows a full system diagram for a small musical theatre production.

Figure 8.28 A small system diagram
Figure 8.29 System diagram for a full sound system

While the system diagram shows the basic signal flow for the entire sound system, a lot of detail is missing about the specific interconnections between devices. This is where a patch plot can be helpful. A patch plot is essentially a spreadsheet that shows every connection point in the sound system. You should be able to use the patch plot to determine which cables, and how many, you’ll need for the sound system. It can also be a useful tool in troubleshooting a sound system that isn’t behaving properly. The majority of the time when things go wrong with your sound system or something isn’t working, it’s because something isn’t connected properly or one of the cables has been damaged. A good patch plot can help you find the problem by showing you where all the connections are located in the signal path. There is no industry standard for creating a patch plot, but the rule of thumb is to err on the side of too much information. You want every possible detail about every audio connection made in the sound system. Sometimes color coding can help make the patch plot easier to understand. Figure 8.30 shows an example patch plot for the sound system in Figure 8.28.

Figure 8.30 Patch plot for a simple sound system

8.2.4.3 Sound Analysis Systems

[aside]Acoustic systems are systems in which the sounds produced depend on the shape and material of the sound-producing instruments. Electroacoustic systems produce sound through electronic technology such as amplifiers and loudspeakers.[/aside]

Section 8.2.4.1 discussed mathematical methods and tools that help you determine where loudspeakers should be placed to maximize clarity and minimize the differences in what is heard in different locations in an auditorium. However, even with good loudspeaker placement, you’ll find there are differences between the original sound signal and how it sounds when it arrives at the listener. Different frequency components respond differently to their environment, and frequency components interact with each other as sounds from multiple sources combine in the air. The question is, how are these frequencies heard by the audience once they pass through loudspeakers and travel through space, encountering obstructions, varying air temperatures, comb filtering, and so forth? Is each frequency arriving at the audience’s ears at the desired amplitude? Are certain frequencies too loud or too quiet? If the high frequencies are too quiet, you could sacrifice the brightness or clarity in the sound. Low frequencies that are too quiet could result in muffled voices. There are no clear guidelines on what the “right” frequency response is because it usually boils down to personal preference, artistic considerations, performance styles, and so forth. In any case, before you can decide whether you have a problem, the first step is to analyze the frequency response in your environment. With practice you can hear and identify frequencies, but sometimes being able to see the frequencies can help you to diagnose and solve problems. This is especially true when you’re setting up the sound system for a live performance in a theatre.

A sound analysis system is one of the fundamental tools for ensuring that frequencies are being received at proper levels. The system consists of a computer running the analysis software, an audio interface with inputs and outputs, and a special analysis microphone.  An analysis microphone is different from a traditional recording microphone. Most recording microphones have a varying response or sensitivity at different frequencies across the spectrum. This is often a desired result of their manufacturing and design, and part of what gives each microphone its unique sound. For analyzing acoustic or electroacoustic systems, you need a microphone that measures all frequencies equally.  This is often referred to as having a flat response.  In addition, most microphones are directional. They pick up sound better in the front than in the back. A good analysis microphone should be omnidirectional so it can pick up the sound coming at it from all directions. Figure 8.31 shows a popular analysis microphone from Earthworks.

Figure 8.31 Earthworks M30 analysis microphone

There are many choices for analysis software, but they all fall into two main categories: signal dependent and signal independent.  Signal dependent sound analysis systems rely on a known stimulus signal that the software generates – e.g., a sine wave sweep.  A sine wave sweep is a sound that begins at a low frequency sine wave and smoothly moves up in frequency to some given high frequency limit.  The sweep, lasting a few seconds or less, is sent by a direct cable connection to the loudspeaker. You then place your analysis microphone at the listening location you want to analyze. The microphone picks up the sound radiated by the loudspeaker so that you can compare what the microphone picks up with what was actually sent out.

The analysis software records and stores the information in a file called an impulse response. The impulse response is a graph of the sound wave with time on the x-axis and the amplitude of the sound wave on the y-axis. This same information can be displayed in a frequency response graph, which has frequencies on the x-axis and the amplitude of each frequency on the y-axis. (Chapter 7 explains the mathematics that transforms the impulse response graph to the frequency response graph, and vice versa.) Figure 8.32 shows an example frequency response graph created by the procedure just described.

Figure 8.32 Frequency response graph created from a signal dependent sound analysis system
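
To give a rough idea of what signal dependent software does internally, here is a hedged NumPy sketch: it builds a logarithmic sine sweep and compares the spectrum of what was sent with the spectrum of what came back. The actual playback and capture step depends on your audio interface and I/O library, so it is left as a placeholder.

```python
import numpy as np

fs = 48000                      # sample rate, Hz
dur = 3.0                       # sweep length, seconds
t = np.arange(int(fs * dur)) / fs

# Logarithmic sine sweep from 20 Hz to 20 kHz.
f0, f1 = 20.0, 20000.0
k = np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * dur / k * (np.exp(t * k / dur) - 1.0))

# Placeholder: play `sweep` through the loudspeaker and capture the
# analysis microphone into `recorded` using your audio I/O library.
recorded = sweep  # substitute the actual microphone capture here

# Ratio of received spectrum to sent spectrum = measured frequency
# response of the loudspeaker + room + microphone chain.
eps = 1e-12                     # avoids division by zero out of band
H = np.fft.rfft(recorded) / (np.fft.rfft(sweep) + eps)
freqs = np.fft.rfftfreq(len(sweep), 1.0 / fs)
response_db = 20 * np.log10(np.abs(H) + eps)
```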

Figure 8.33 shows a screenshot from FuzzMeasure Pro, a signal dependent analysis program that runs on the Mac operating system. The frequency response is on the top, and the impulse response is at the bottom. As you recall from Chapter 2, the frequency response has frequencies on the horizontal axis and amplitudes of these frequency components on the vertical axis. It shows how the frequencies “responded” to their environment as they moved from the loudspeaker to the microphone. We know that the sine sweep emitted had frequencies distributed evenly across the audible spectrum, so if the sound was not affected in passage, the frequency response graph should be flat. But notice in the graph that the frequencies between 30 Hz and 500 Hz are 6 to 10 dB louder than the rest; this is their response to the environment.

Figure 8.33 FuzzMeasure Pro sound analysis software

When you look at an analysis such as this, it’s up to you to decide if you’ve identified a problem that you want to solve. Keep in mind that the goal isn’t necessarily to make the frequency response graph be a straight line, indicating all frequencies are of equal amplitude. The goal is to make the right kind of sound. Before you can decide what to do, you need to determine why the frequency response sounds like this. There are many possible reasons.  It could be that you’re too far off-axis from the loudspeaker generating the sound. That’s not a problem you can really solve when you’re analyzing a listening space for a large audience, since not everyone can sit in the prime location. You could move the analysis microphone so that you’re on-axis with the loudspeaker, but you can’t fix the off-axis frequency response for the loudspeaker itself.  In the example shown in Figure 8.34 the loudspeaker system that is generating the sound uses two sets of sound radiators. One set of loudspeakers generates the frequencies above 500 Hz. The other set generates the frequencies below 500 Hz. Given that information, you could conclude that the low-frequency loudspeakers are simply louder than the high frequency ones. If this is causing a sound that you don’t want, you could fix it by reducing the level of the low-frequency loudspeakers.

Figure 8.34 Frequency response graph showing a low frequency boost

Figure 8.35 shows the result after this correction. The grey line shows the original frequency response, and the black line shows the frequency response after reducing the amplitude of the low-frequency loudspeakers by 6 dB.

Figure 8.35 Frequency response graph after reducing the low-frequency loudspeakers by 6 dB

The previous example gives you a sketch of how a sound analysis system might be used. You place yourself in a chosen position in a room where sound is to be performed or played, generate sound that is played through loudspeakers, and then measure the sound as it is received at your chosen position. The frequencies that are actually detected may not be precisely the frequency components of the original sound that was generated or played.   By looking at the difference between what you played and what you are able to measure, you can analyze the frequency response of your loudspeakers, the acoustics of your room, or a combination of the two. The frequencies that are measured by the sound analysis system are dependent not only on the sound originally produced, but also on the loudspeakers’ types and positions, the location of the listener in the room, and the acoustics of the room. Thus, in addition to measuring the frequency response of your loudspeakers, the sound analysis system can help you to determine if different locations in the room vary significantly in their frequency response, leaving it to you to decide if this is a problem and what factor might be the source.

The advantage to a signal dependent system is that it’s easy to use, and with it you can get a good general picture of how frequencies will sound in a given acoustic space with certain loudspeakers. You also can save the frequency response graphs to refer to and analyze later. The disadvantage to a signal dependent analysis system is that it uses only artificially-generated signals like sine sweeps, not real music or performances.

If you want to analyze actual music or performances, you need to use a signal independent analysis system. These systems allow you to analyze the frequency response of recorded music, voice, sound effects, or even live performances as they sound in your acoustic space. In contrast to systems like FuzzMeasure, which know the precise sweep of frequencies they’re generating, signal independent systems must be given a direct copy of the sound being played so that the original sound can be compared with the sound that passes through the air and is received by the analysis microphone. This is accomplished by taking the original sound and sending one copy of it to the loudspeakers while a second copy is sent directly, via cable, to the sound analysis software. The software presumably is running on a computer that has a sound card attached with two sound inputs. One of the inputs is the analysis microphone and one is a direct feed from the sound source. The software compares the two signals in real time – as the music or sound is played – and tells you what is different about them.
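
Conceptually, that real-time comparison resembles the sketch below, which computes the per-frequency dB difference between the direct feed and the microphone signal for one block of samples. Real analyzers are more sophisticated – they average many blocks and weight by coherence to reject noise – so treat this as an illustration only.

```python
import numpy as np

def transfer_function_db(reference_block, mic_block, eps=1e-12):
    """Per-frequency dB difference between what was sent (direct feed)
    and what was received (analysis microphone) for one block."""
    window = np.hanning(len(reference_block))   # reduce spectral leakage
    ref_spec = np.fft.rfft(reference_block * window)
    mic_spec = np.fft.rfft(mic_block * window)
    return 20 * np.log10((np.abs(mic_spec) + eps) / (np.abs(ref_spec) + eps))
```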

The advantage of the signal independent system is that it can analyze “real” sound as it is being played or performed. However, real sound has frequency components that constantly change, as we can tell from the constantly changing pitches that we hear. Thus, there isn’t one fixed frequency response graph that gives you a picture of how your loudspeakers and room are dealing with the frequencies of the sound. The graph changes dynamically over the entire time that the sound is played. For this reason, you can’t simply save one graph and carry it off with you for analysis. Instead, your analysis consists of observing the constantly-changing frequency response graph in real time, as the sound is played. If you wanted to save a single frequency response graph, you’d have to do what we did to generate Figure 8.36 – that is, get a “screen capture” of the frequency response graph at a specific moment in time – and the information you have is about only that moment. Another disadvantage of signal independent systems is that they analyze the noise in the environment along with the desired sound.

Figure 8.36 was produced from a popular signal independent analysis program called Smaart Live, which runs on Windows and Mac operating systems. The graph shows the difference, in decibels, between the amplitudes of the frequencies played vs. those received by the analysis microphone. Because this is only a snapshot in time, coupled with the fact that noise is measured as well, it isn’t very informative to look at just one graph like this. Being able to glean useful information from a signal independent sound analysis system comes from experience in working with real sound – learning how to compare what you want, what you see, what you understand is going on mathematically, and – most importantly – what you hear.

Figure 8.36 Smaart Live sound analysis software

8.2.4.4 System Optimization

Once you have the sound system installed and everything is functioning, the system needs to be optimized. System optimization is a process of tuning and adjusting the various components of the sound system so that

  • they’re operating at the proper volume levels,
  • the frequency response of the sound system is consistent and desirable,
  • destructive interactions between system components and the acoustical environment have been minimized, and
  • the timing of the various system components has been adjusted so the audience hears the sounds at the right time.

The first optimization you should perform applies to the gain structure of the sound system. When working with sound systems in either a live performance or recording situation, gain structure is a big concern. In a live performance situation, the goal is to amplify sound. In order to achieve the highest potential for loudness, you need to get each device in your system operating at the highest level possible so you don’t lose any volume as the sound travels through the system. In a recording situation, you’re primarily concerned with signal-to-noise ratio. In both of these cases, good gain structure is the solution.

In order to understand gain structure, you first need to understand that all sound equipment makes noise. All sound devices also contain amplifiers. What you want to do is amplify the sound without amplifying the noise. In a sound system with good gain structure, every device is receiving and sending sound at the highest level possible without clipping. Lining up the gain for each device involves lining up the clip points. You can do this by starting with the first device in your signal chain – typically a microphone or some sort of playback device. It’s easier to set up gain structure using a playback source because you can control the output volume. Start by playing something on the CD player, synthesizer, computer, iPod, or whatever your playback device is in a way that outputs the highest volume possible. This is usually done with either normalized pink noise or a normalized sine wave. Turn up the gain preamplifier on the mixing console or sound card input so that the level coming from the playback source clips the input. Then back off the gain until that sound is just below clipping. If you’re recording this sound, your gain structure is now complete. Just repeat this process for each input. If it’s a live performer on a microphone, ask the performer to sing or play at the highest volume they expect to generate and adjust the input gain accordingly.

[wpfilebase tag=file id=145 tpl=supplement /]

If you’re in a live situation, the mixing console will likely feed its sound into another device such as a processor or power amplifier. With the normalized audio from your playback source still running, adjust the output level of the mixing console so it’s also just below clipping. Then adjust the input level of the next device in the signal chain so that it’s receiving this signal at just below its clipping point. Repeat this process until you’ve adjusted every input and output in your sound system. At this point, everything should clip at the same time. If you increase the level of the playback source or input preamplifier on the mixing console, you should see every meter in your system register a clipped signal. If you’ve done this correctly, you should now have plenty of sound coming from your sound system without any hiss or other noise. If the sound system is too loud, simply turn down the last device in the signal chain. Usually this is the power amplifier.

Setting up proper gain structure in a sound system is fairly simple once you’re familiar with the process. The Max demo on gain structure associated with this section gives you an opportunity to practice the technique. Then you should be ready to line up the gain for your own systems.

Once you have the gain structure optimized, the next thing to do is minimize destructive interactions between loudspeakers. One reason that loudspeaker directivity is important is the potential for multiple loudspeakers to interact destructively if their coverage overlaps in physical space. Most loudspeakers can exercise some directional control over frequencies higher than 1 kHz, but frequencies lower than 1 kHz tend to be fairly omnidirectional, which means they will more easily run into each other in the air. The basic strategy for avoiding destructive interactions is to adjust the angle between two loudspeakers so their coverage patterns intersect at the point where each is 6 dB quieter than its on-axis level, as shown in Figure 8.37. This overlap point is the only place where the two loudspeakers combine at the same level. If you can pull that off, you can then adjust the timing of the loudspeakers so they’re perfectly in phase at that overlap point. Destructive interaction is eliminated because the waves reinforce each other, creating a 6 dB boost that fills in the dip in high-frequency level at the edge of each loudspeaker’s coverage. The result is even sound across the covered area. The small number of listeners who happen to be sitting in an area of overlap between two loudspeakers will effectively be covered by a virtual coherent loudspeaker.

When you move away from that perfect overlap point, one loudspeaker gets louder as you move closer to it, while the other gets quieter as you move farther away. This is handy for two reasons. First, the overall combined level should remain pretty consistent at any angle as you move through the perfect overlap point. Second, for any angle outside of that perfect overlap point, while the timing relationship between the two loudspeaker arrivals begins to differ, the loudspeakers also differ more and more in level. As pure comb filtering requires both of the interacting signals to be at the same amplitude, the level difference greatly reduces the effect of the comb filtering introduced by the shift in timing. The place where the sound from the two loudspeakers arrives at the same amplitude and comb filters the most is at the center of the overlap, but this is the place where we aligned the timing perfectly to prevent comb filtering in the first place. With this technique, not only do you get the wider coverage that comes with multiple loudspeakers, but you also get to avoid the comb filtering!

Figure 8.37 Minimizing comb filtering between two loudspeakers
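
The timing adjustment at the overlap point is simple distance arithmetic, as in this small Python sketch. The 1130 ft/s speed of sound is an approximation, and the example distances are made up.

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s, approximate

def alignment_delay_ms(near_dist_ft: float, far_dist_ft: float) -> float:
    """Delay to apply to the nearer loudspeaker's feed so both arrivals
    reach the overlap point at the same time (in phase)."""
    return (far_dist_ft - near_dist_ft) / SPEED_OF_SOUND_FT * 1000.0

# Example: overlap point 30 ft from one loudspeaker, 42 ft from the other.
print(f"{alignment_delay_ms(30.0, 42.0):.1f} ms")  # ~10.6 ms of delay
```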

[wpfilebase tag=file id=134 tpl=supplement /]

What about the low frequencies in this example? Well, they’re going to run into each other at similar amplitudes all around the room because they’re more omnidirectional than the high frequencies. However, they also have longer wavelengths, which means they require much larger offsets in time to cause destructive interaction. Consequently, they largely reinforce each other, giving an overall low frequency boost. Sometimes this free bass boost sounds good. If not, you can easily fix it with a system EQ adjustment by adding a low shelf filter that reduces the low frequencies by a certain amount to flatten out the frequency response of the system. This process is demonstrated in our video on loudspeaker interaction.
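
If you implement that correction digitally, one standard recipe is the low-shelf biquad from the widely used Audio EQ Cookbook. The sketch below follows that recipe; the 500 Hz corner and -6 dB gain are example values echoing the earlier figures, not a prescription.

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf_coeffs(fs, f0, gain_db, S=1.0):
    """Biquad low-shelf coefficients (Audio EQ Cookbook formulas)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw0 = np.cos(w0)
    b = np.array([
        A * ((A + 1) - (A - 1) * cosw0 + 2 * np.sqrt(A) * alpha),
        2 * A * ((A - 1) - (A + 1) * cosw0),
        A * ((A + 1) - (A - 1) * cosw0 - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) + (A - 1) * cosw0 + 2 * np.sqrt(A) * alpha,
        -2 * ((A - 1) + (A + 1) * cosw0),
        (A + 1) + (A - 1) * cosw0 - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]

# Example: pull the build-up below ~500 Hz down by 6 dB at fs = 48 kHz.
b, a = low_shelf_coeffs(48000, 500.0, -6.0)
noise = np.random.randn(48000)       # stand-in for the system feed
filtered = lfilter(b, a, noise)      # low frequencies now ~6 dB quieter
```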

You should work with your loudspeakers in smaller groups, sometimes called systems. A center cluster of loudspeakers being used to cover the entire listening area from a single point source would be considered a system. You need to work with all the loudspeakers in that cluster to ensure they are working well together. A row of front fill loudspeakers at the edge of the stage being used to cover the front few rows will also need to be optimized as an individual system.

Once you have each loudspeaker system optimized, you need to work with all the systems together to ensure they don’t destructively interact with each other. This typically involves manipulating the timing of each system. There are two main strategies for time aligning loudspeaker systems. You can line the system up for coherence, or you can line the system up for precedence imaging. The coherence strategy involves working with each loudspeaker system to ensure that their coverage areas are as isolated as possible. This process is very similar to the process we described above for aligning the splay angles of two loudspeakers. In this case, you’re doing the same thing for two loudspeaker systems. If you can line up two different systems so that the 6 dB down point of each system lands in the same point in space, you can then apply delay to the system arriving first so that both systems arrive at the same time, causing a perfect reinforcement. If you can pull this off for the entire sound system and the entire listening area, the listeners will effectively be listening to a single, giant loudspeaker with optimal coherence.

The natural propagation of sound in an acoustic space is inherently not very coherent due to the reflection and absorption of sound, resulting in destructive and constructive interactions that vary across the listening area. This lack of natural coherence is often the reason a sound reinforcement system is installed in the first place. A sound system that has been optimized for coherence has the characteristic of sounding very clear and very consistent across the listening area. These can be very desirable qualities in a sound system where clarity and intelligibility are important. The downside to this optimization strategy is that it sometimes does not sound very natural. This is because with coherence-optimized sound systems, the direct sound from the original source (i.e., a singer or performer on stage) typically has little to no impact on the audience, and so the audience perceives the sound as coming directly from the loudspeakers. If you’re close enough to the stage and the singer, and the loudspeakers are way off to the side or far overhead, it can be strange to see the actual source yet hear the sound come from somewhere else. In an arena or stadium setting, or at a rock concert where you likely wouldn’t hear much direct sound in the first place, this isn’t as big a problem. Sound designers are sometimes willing to accept a slightly unnatural sound if it means that they can solve the clarity and intelligibility problems that occur in the acoustic space.

[aside]While your loudspeakers might sit still for the whole show, the performers usually don’t.  Out Board’s TiMax tracker and soundhub delay matrix system use radar technology to track actors and performers around a stage in three dimensions, automating and adjusting the delay times to maintain precedence and deliver natural, realistic sound throughout the performance.[/aside]

Optimizing the sound system for precedence imaging is the complete opposite of the coherence strategy. In this case, the goal is to increase the clarity and loudness of the sound system while maintaining a natural sound as much as possible. In other words, you want the audience to be able to hear and understand everything in the performance, but you want them to think that what they are hearing is coming naturally from the performer instead of coming from loudspeakers in a sound system. In a precedence imaging sound system, each loudspeaker system behaves like an early reflection in an acoustic space. For this strategy to work, you want to maximize the overlap between the various loudspeaker systems. Each listener should be able to hear two or three loudspeaker systems from a single seat. The danger here is that these overlapping loudspeaker systems can easily comb filter in a way that makes the sound unpleasant or completely unintelligible. Using the precedence effect described in Chapter 4, you can manipulate the delay of each loudspeaker system so their arrivals reach the listener at least five milliseconds apart but no more than 30 milliseconds apart. The signals still comb filter, but in a way that our hearing system naturally compensates for. Once all of the loudspeakers are lined up, you’ll also want to delay the entire sound system back to the performer position on stage. As long as the natural sound from the performer arrives first, followed by a succession of similar sounds from the various loudspeaker systems each within this precedence timing window, you get increased volume and clarity as perceived by the listener while still maintaining the effect of a natural acoustic sound. If that natural sound is a priority, you can achieve acceptable results with this method, but you will sacrifice some of the additional clarity and intelligibility that comes with a coherent sound system.
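
The delay arithmetic for precedence imaging is again straightforward. The sketch below computes an electronic delay that holds a loudspeaker’s arrival a fixed offset behind the performer’s natural sound at a given listener; the 10 ms offset is an assumed value inside the 5-30 ms window described above.

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s, approximate

def precedence_delay_ms(performer_to_listener_ft: float,
                        speaker_to_listener_ft: float,
                        offset_ms: float = 10.0) -> float:
    """Electronic delay for the loudspeaker feed so its arrival lags the
    performer's natural sound by offset_ms at this listener position."""
    natural_ms = performer_to_listener_ft / SPEED_OF_SOUND_FT * 1000.0
    speaker_ms = speaker_to_listener_ft / SPEED_OF_SOUND_FT * 1000.0
    return max(0.0, natural_ms - speaker_ms + offset_ms)

# Example: performer 20 ft from the listener, loudspeaker 8 ft away.
print(f"{precedence_delay_ms(20.0, 8.0):.1f} ms")  # ~20.6 ms of delay
```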

Both of these optimization strategies are valid, and you’ll need to evaluate your situation in each case to decide which kind of optimized system best addresses the priorities of your situation. In either case, you need some sort of system processor to perform the EQ and delay functions for the loudspeaker systems. These processors usually take the form of a dedicated digital signal-processing unit with multiple audio inputs and outputs. These system processors typically require a separate computer for programming, but once the system has been programmed, the units perform quite reliably without any external control. Figure 8.38 shows an example of a programming interface for a system processor.

Figure 8.38 Programming interface for a digital system processor

8.2.4.5 Multi-Channel Playback

Mid-Side can also be effective as a playback technique for delivering stereo sound to a large listening area. One of the limitations to stereo sound is that the effect relies on having the listener perfectly centered between the two loudspeakers. This is usually not a problem for a single person listening in a small living room. If you have more than one listener, such as in a public performance space, it can be difficult if not impossible to get all the listeners perfectly centered between the two loudspeakers. The listeners who are positioned to the left or right of the center line will not hear a stereo effect. Instead they will perceive most of the sound to be coming from whichever loudspeaker they are closest to. A more effective strategy would be to set up three loudspeakers. One would be your Mid loudspeaker and would be positioned in front of the listeners. The other two loudspeakers would be positioned directly on either side of the listeners as shown in Figure 8.39.

Figure 8.39 Mid Side loudspeaker setup

If you have an existing audio track that has been mixed in stereo, you can create a reverse Mid-Side matrix to convert the stereo information to a Mid-Side format. The Mid loudspeaker gets a L+R audio signal, equivalent to summing the two stereo tracks to a single mono signal. The Side+ loudspeaker gets a L-R audio signal, equivalent to inverting the right channel polarity and summing the two channels to a mono signal. This cancels out anything that is equal in the two channels, essentially removing all the Mid information. The Side- loudspeaker gets a R-L audio signal; inverting the left channel polarity and summing to mono, or simply inverting the Side+ signal, achieves this effect. The listeners in this scenario will all hear something similar to a stereo effect. The right channel stereo audio will cancel out in the air between the Mid and Side+ loudspeakers, and the left channel stereo audio will cancel out in the air between the Mid and Side- loudspeakers. Because the Side+/- loudspeakers are directly to the side of the listeners, they will all hear this stereo effect regardless of whether they are directly in front of the Mid loudspeaker. Just like Mid-Side recording, the stereo image can be widened or narrowed as the balance between the Mid loudspeaker and Side loudspeakers is adjusted.
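
The reverse Mid-Side matrix just described is only a few lines of arithmetic. Here is a minimal NumPy sketch; the 0.5 scaling is a precaution against clipping rather than part of the matrix itself.

```python
import numpy as np

def stereo_to_mid_side(left: np.ndarray, right: np.ndarray):
    """Derive the three playback feeds described above from a stereo mix."""
    mid = left + right          # L+R: stereo mix summed to mono
    side_plus = left - right    # L-R: right polarity inverted, then summed
    side_minus = right - left   # R-L: simply the inverted Side+ signal
    # Scale the sums down to guard against clipping on playback.
    return 0.5 * mid, 0.5 * side_plus, 0.5 * side_minus
```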

You don’t need to stop at just three loudspeakers. As long as you have more outputs on your playback system, you can continue to add loudspeakers to your system to help you create more interesting soundscapes. Mid-Side playback illustrates an important point: having multiple loudspeakers doesn’t mean you have surround sound. If you play the same sound out of each loudspeaker, the precedence effect takes over and each listener will source the sound to the closest loudspeaker. To create surround sound effects, you need to have different sounds in each loudspeaker. Mid-Side playback demonstrates how you can modify a single sound to have different properties in three loudspeakers, but you could also have completely different sounds playing from each loudspeaker. For example, instead of having a single track of raindrops playing out of ten loudspeakers, you could have ten different recordings of water dripping onto various surfaces. This will create a much more realistic and immersive rain effect. You can also mimic acoustic effects using multiple loudspeakers. You could have the dry sound of a recorded musical instrument playing out of the loudspeakers closest to the stage and then play various reverberant or wet versions of the recording out of the loudspeakers near the walls. With multiple playback channels and multiple loudspeakers you can also create the effect of a sound moving around the room by automating volume changes over time.

8.2.4.6 Playback and Control

Sound playback has evolved greatly in the past decades, and it’s safe to say tape decks with multiple operators and reel changes are a thing of history. While some small productions may still use CD players, MiniDiscs, or even MP3 players to play back their sound, it’s also safe to say that computer-based playback is the system of choice, especially in any professional production. Already an integral part of the digital audio workflow, computers offer flexibility, scalability, predictability, and unprecedented control over audio playback. Being able to consistently run a performance and reduce operator error is a huge advantage that computer playback provides. Yet as simple as it may be to operate on the surface, the potential complexity behind a single click of a button can be enormous.

Popular computer sound playback software systems include SFX by Stage Research for Windows operating systems, and QLab by Figure 53 on a Mac. These playback tools allow for many methods of control and automation, including sending and receiving MIDI commands, scripting, telnet, and more, allowing them to communicate with almost any other application or device. These playback systems also allow you to use multiple audio outputs, sending sound out anywhere you want, be it a few specific locations or the entire sound system. This is essential for creating immersive and dynamic surround effects. You’ll need a separate physical output channel from your computer audio interface for each loudspeaker location (or group of loudspeakers, depending on your routing) in your system that you want to control individually.

Controlling these systems can be as simple as using the mouse pointer on your computer to click a GO button. Yet that single click could trigger layers and layers of sound and control cues, with specifically timed sequences that execute an entire automated scene change or special effect. Theme parks use these kinds of playback systems to automatically control an entire show or environment, including sound playback, lighting effects, mechanical automation, and any other special effects. In these cases, sometimes the simple GO isn’t even triggered by a human operator, but by a timed script, making the entire playback and control a consistent and self-reliant process. Using MIDI or Open Sound Control you can get into very complex control systems. Other possible examples include using sensors built into scenery or costumes for actor control, as well as synchronizing sound, lighting, and projection systems to keep precisely timed sequences operating together and exactly on cue, such as a simulated lightning strike. Outside of an actual performance, these control systems can benefit you as a designer by providing a means of wireless remote control from a laptop or tablet, allowing you to make changes to cues while listening from various locations in the theatre.

Using tools such as Max or PD, you can capture input from all kinds of sources such as cameras, mobile devices, or even video game controllers, and use that control data to generate MIDI commands to control sound playback. You’ll always learn more by actually doing it than by simply reading about it, so included in this section are several exercises to get you started making your own custom control and sound playback systems.
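
As a small taste of that kind of glue code, here is a hypothetical Python sketch using the mido library to send a MIDI note that a playback system such as QLab or SFX could be mapped to treat as a GO. The default port and the note number are assumptions you would adapt to your own setup.

```python
import mido

# Open the default MIDI output port (pass a port name to choose another).
port = mido.open_output()

def fire_cue(note: int = 60) -> None:
    """Send a note-on that the playback software is mapped to treat as GO."""
    port.send(mido.Message('note_on', note=note, velocity=127))

fire_cue()  # e.g., call this from a sensor or game-controller callback
```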

[wpfilebase tag=file id=158 tpl=supplement /]

[wpfilebase tag=file id=180 tpl=supplement /]

Regardless of the path you’ve taken through our written material, at some point you need to start doing something with this information. We’ve looked at many concepts separately, but in a real project you will apply several of these concepts at the same time. Linked from this section are some suggestions for projects that allow you the opportunity to apply what you’ve learned. We invite you to exercise your creativity in your chosen field of study or application, synthesizing your knowledge and imagination in the complex, fascinating, ubiquitous world of sound.

 
