Table of Contents

Section 2.3 - Science, Mathematics, and Algorithms

2.3.1 Modeling Sound in Max

Max (distributed by Cycling ’74 and formerly called Max/MSP/Jitter) is a software environment for manipulating digital audio and MIDI data. The interface for Max is at a high level of abstraction, allowing you to patch together sound objects by dragging icons for them to a window and linking them with patch cords, similar to the way audio hardware is connected. If you aren't able to use Max, which is a commercial product, you can try substituting the freeware program Pure Data. We introduce you briefly to Pure Data in Section 2.3.2. In future chapters, we’ll limit our examples to Max because of its highly developed functionality, but Pure Data is a viable free alternative that you might want to try.

Max can be used in two ways. First, it’s an excellent environment for experimenting with sound simply to understand it better. As you synthesize and analyze sound through built-in Max objects and functions, you develop a better understanding of the event-driven nature of MIDI versus the streaming-data nature of digital audio, and you see more closely how these kinds of data are manipulated through transforms, effects processors, and the like. Second, Max is actually used in the sound industry. When higher-level audio processing programs like Logic, Pro Tools, Reason, or Sonar don’t meet their specific needs, audio engineers can create specially designed systems using Max.

Max is actually a combination of two components that can be made to work together. Max allows you to work with MIDI objects, and MSP is designed for digital audio objects. Since we won’t go into depth in MIDI until Chapter 6, we’ll just look at MSP for now.

Let’s try a simple example of MSP to get you started. Figure 2.24 shows an MSP patcher for adding three pure sine waves. A patcher – whether it has Max objects, MSP objects, or both – is a visual representation of a program that operates on sound. The objects are connected by patch cords. These cords run between the inlets and outlets of objects.

One of the first things you’ll probably want to do in MSP is create simple sine waves and listen to them. To be able to work on a patcher, you have to be in edit mode, as opposed to lock mode. You enter edit mode by pressing Ctrl-E (or Command-E) or by clicking the lock icon on the task bar. Once in edit mode, you can click on the + icon on the task bar, which gives you a menu of objects. (We refer you to the Max Help for details on inserting various objects.) The cycle~ object creates a sine wave of whatever frequency you specify.

Notice that MSP objects are distinguished from Max objects by the tilde at the end of their names. This reminds you that MSP objects send audio data streaming continuously through them, while Max objects typically are triggered by and trigger discrete events.

The speaker icon in Figure 2.24 represents the ezdac~ object, which stands for “easy digital-to-analog conversion” and sends sound to the sound card to be played. A number object sends the desired frequency to each cycle~ object, which in turn sends its sine wave to a scope~ object for display as a graph. The three frequencies are summed with two consecutive +~ objects, and the resulting complex waveform is displayed.

Figure 2.24 MSP patcher for adding three sine waves

To understand these objects and functions in more detail, you can right click on them to get an Inspector window, which shows the parameters of the objects. Figure 2.25 shows the inspector for the meter~ object, which looks like an LED and is to the left of the Audio On/Off switch in Figure 2.24. You can change the parameters in the Inspector as you wish. You also can right click on an object, go to the Help menu (Figure 2.26), and access an example patcher that helps you with the selected object. Figure 2.27 shows the Help patcher for the ezdac~ object. The patchers in the Help can be run as programs, opened in Edit mode, and copied and pasted into your own patchers.

Figure 2.25  meter~ Inspector in Max

Figure 2.26 Accessing ezdac~ Help in Max

Figure 2.27 ezdac~ Help patcher in Max

When you create a patcher, you may want to define how it looks to the user in what is called presentation mode, a view in which you can hide some of the implementation details of the patcher to make its essential functionality clearer. The example patcher’s presentation mode is shown in Figure 2.28.

Figure 2.28 Presentation mode of Max patcher

2.3.2 Modeling Sound Waves in Pure Data (PD)

Pure Data (aka PD) is a free alternative to Max developed by one of the originators of Max, Miller Puckette, and his collaborators.  Like Max, it is a graphical programming environment that links digital audio and MIDI.

The Max program to add sine waves is implemented in PD in Figure 2.29.  You can see that the interface is very similar to Max’s, although PD has fewer GUI components and no presentation mode.

Figure 2.29 Adding sound waves in Pure Data

2.3.3 Modeling Sound in MATLAB

It's easy to model and manipulate sound waves in MATLAB, a mathematical modeling program. If you learn just a few of MATLAB’s built-in functions, you can create sine waves that represent sounds of different frequencies, add them, plot the graphs, and listen to the resulting sounds. Working with sound in MATLAB helps you to understand the mathematics involved in digital audio processing. In this section, we'll introduce you to the basic functions that you can use for your work in digital sound. This will get you started with MATLAB, and you can explore further on your own. If you aren't able to use MATLAB, which is a commercial product, you can try substituting the freeware program Octave. We introduce you briefly to Octave in Section 2.3.5. In future chapters, we’ll limit our examples to MATLAB because it is widely used and has an extensive Signal Processing Toolbox that is extremely useful in sound processing. We suggest Octave as a free alternative that can accomplish some, but not all, of the examples in remaining chapters.

Before we begin working with MATLAB, let’s review the basic sine functions used to represent sound. In the equation y = Asin(2πfx + θ), frequency f is assumed to be measured in Hertz. An equivalent form of the sine function, and one that is often used, is expressed in terms of angular frequency, ω, measured in units of radians/s rather than Hertz. Since there are 2π radians in a cycle, and Hz is cycles/s, the relationship between frequency in Hertz and angular frequency in radians/s is as follows:

Let f be the frequency of a sine wave in Hertz. Then the angular frequency, ω, in radians/s, is given by

\omega =2\pi f

 

Equation 2.7

We can now give an alternative form for the sine function.

A single-frequency sound wave with angular frequency ω, amplitude A, and phase θ is represented by the sine function

 

y=A\sin \left ( \omega t+\theta \right )

Equation 2.8

In our examples below, we show the frequency in Hertz, but you should be aware of these two equivalent forms of the sine function. MATLAB’s sine function expects its argument in radians, so a frequency f given in Hertz must be multiplied by 2π.
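
As a quick sanity check of this equivalence, you can evaluate both forms in MATLAB at a few points in time. This sketch uses the 262 Hz frequency that appears in the examples below; the variable names are just for illustration:

```matlab
f = 262;               % frequency in Hertz
w = 2*pi*f;            % angular frequency in radians/s (Equation 2.7)
t = [0 0.001 0.002];   % a few points in time, in seconds
y1 = sin(2*pi*f*t);    % Hertz form, converted to radians inside the call
y2 = sin(w*t);         % angular frequency form; identical to y1
```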

Now let’s look at how we can model sounds with sine functions in MATLAB. Middle C on a piano keyboard has a frequency of approximately 262 Hz. To create a sine wave in MATLAB at this frequency and plot the graph, we can use the fplot function as follows:

fplot('sin(262*2*pi*t)', [0, 0.05, -1.5, 1.5]);

The graph in Figure 2.30 pops open when you type in the above command and hit Enter. Notice that the function you want to graph is enclosed in single quotes. Also, notice that the constant π is represented as pi in MATLAB. The portion in square brackets indicates the limits of the horizontal and vertical axes. The horizontal axis goes from 0 to 0.05, and the vertical axis goes from –1.5 to 1.5.

Figure 2.30 262 Hz sine wave

If we want to change the amplitude of our sine wave, we can insert a value for A. If A > 1, we may have to alter the range of the vertical axis to accommodate the higher amplitude, as in

fplot('2*sin(262*2*pi*t)', [0, 0.05, -2.5, 2.5]);

After multiplying by A=2 in the statement above, the top of the sine wave goes to 2 rather than 1.

To change the phase of the sine wave, we add a value θ. Phase is essentially a relationship between two sine waves with the same frequency. When we add θ to the sine wave, we are creating a sine wave with a phase offset of θ compared to a sine wave with phase offset of 0. We can show this by graphing both sine waves on the same graph. To do so, we graph the first function with the command

fplot('2*sin(262*2*pi*t)', [0, 0.05, -2.5, 2.5]);

We then type

hold on

This will cause all future graphs to be drawn on the currently open figure. Thus, if we type

fplot('2*sin(262*2*pi*t+pi)', [0, 0.05, -2.5, 2.5]);

we have two phase-offset graphs on the same plot. In Figure 2.31, the 0-phase-offset sine wave is in red and the 180° phase-offset sine wave is in blue.

Figure 2.31 Two sine waves, one offset 180 degrees from the other

Notice that the offset is given in units of radians rather than degrees, 180° being equal to π radians.

To change the frequency, we change ω. For example, changing ω to 440*2*pi gives us a graph of the note A above middle C on a keyboard.

fplot('sin(440*2*pi*t)', [0, 0.05, -1.5, 1.5]);

The above command gives this graph:

Figure 2.32 440 Hz sine wave

Then with

hold on

fplot('sin(262*2*pi*t)', [0, 0.05, -1.5, 1.5], 'red');

we get this figure:

Figure 2.33 Two sine waves plotted on same graph

The 262 Hz sine wave in the graph is red to differentiate it from the blue 440 Hz wave.

The last parameter in the fplot function causes the graph to be plotted in red. Changing the color or line width also can be done by choosing Edit/Figure Properties on the figure, selecting the sine wave, and changing its properties.

We also can add sine waves to create more complex waves, as we did using Adobe Audition in Section 2.2.2. This is a simple matter of adding the functions and graphing the sum, as shown below.

figure

fplot('sin(262*2*pi*t)+sin(440*2*pi*t)', [0, 0.05, -2.5, 2.5]);

First, we type figure to open a new empty figure (so that our new graph is not overlaid on the currently open figure). We then graph the sum of the sine waves for the notes C and A. The result is this:

Figure 2.34  The sum of two sine waves

We've used the fplot function in these examples. This function makes it appear as if the graph of the sine function is continuous. Of course, MATLAB can't really graph a continuous list of numbers, which would be infinite in length. The name MATLAB, in fact, is an abbreviation of "matrix laboratory." MATLAB works with arrays and matrices. In Chapter 5, we'll explain how sound is digitized such that a sound file is just an array of numbers. The plot function is the best one to use in MATLAB to graph these values. Here's how this works.

First, you have to declare an array of values to use as input to a sine function. Let's say that you want one second of digital audio at a sampling rate of 44,100 Hz (i.e., 44,100 samples/s), a standard sampling rate. Let's set the values of variables for sampling rate sr and number of seconds s, just to remind you for future reference of the relationship between the two.

sr = 44100;
s = 1;

Now, to give yourself an array of time values across which you can evaluate the sine function, you do the following:

t = linspace(0,s, sr*s);

This creates an array of sr * s values evenly-spaced between and including the endpoints. Note that when you don't put a semi-colon after a command, the result of the command is displayed on the screen. Thus, without a semi-colon above, you'd see the 44,100 values scroll in front of you.

To evaluate the sine function across these values, you type

y = sin(2*pi*262*t);

One statement in MATLAB can cause an operation to be done on every element of an array. In this example, y = sin(2*pi*262*t) takes the sine of each element of array t and stores the result in array y. To plot the sine wave, you type

plot(t,y);

Time is on the horizontal axis, between 0 and 1 second. Amplitude of the sound wave is on the vertical axis, scaled to values between -1 and 1. The graph is too dense for you to see the wave properly. There are three ways you can zoom in. One is by choosing Axes Properties from the graph's Edit menu and then resetting the range of the horizontal axis. The second way is to type an axis command like the following:

axis([0 0.1 -2 2]);

This displays only the first 1/10 of a second on the horizontal axis, with a range of -2 to 2 on the vertical axis so you can see the shape of the wave better.

You can also ask for a plot of a subset of the points, as follows:

plot(t(1:1000),y(1:1000));

The above command plots only the first 1000 points from the sine function. Notice that the length of the two arrays must be the same for the plot function, and that numbers representing array indices must be positive integers. In general, if you have an array t of values and want to look at only the ith to the jth values, use t(i:j).

An advantage of generating an array of sample values from the sine function is that with that array, you actually can hear the sound. When you send the array to the wavplay or sound function, you can verify that you've generated one second's worth of the frequency you wanted, middle C. You do this with

wavplay(y, sr);

(which works on Windows only) or, more generally,

sound(y, sr);

The first parameter is an array of sound samples. The second parameter is the sampling rate, which lets the system know how many samples to play per second.

MATLAB has other built-in functions for generating waves of special shapes. We'll go back to using fplot for these. For example, we can generate square, sawtooth, and triangular waves with the three commands given below:

fplot('square(t)',[0,10*pi,-1.5,1.5]);
Figure 2.35 Square wave

fplot('sawtooth(t)',[0,10*pi]);
Figure 2.36 Sawtooth wave

fplot('2*pulstran(t,[0:10],''tripuls'')-1',[0,10]);
Figure 2.37 Triangle wave

(Notice that the tripuls parameter is surrounded by two single quotes on each side.)

This section is intended only to introduce you to the basics of MATLAB for sound manipulation, and we leave it to you to investigate the above commands further. MATLAB has an extensive Help feature which gives you information on the built-in functions.

Each of the functions above can be created “from scratch” if you understand the nature of the non-sinusoidal waves. The ideal square wave is constructed from an infinite sum of odd-numbered harmonics of diminishing amplitude. More precisely, if f is the fundamental frequency of the non-sinusoidal wave to be created, then a square wave is constructed by the following infinite summation:

Let f be a fundamental frequency. Then a square wave created from this fundamental frequency is defined by the infinite summation

 

\sum_{n=0}^{\infty }\frac{1}{\left ( 2n+1 \right )}\sin \left ( 2\pi \left ( 2n+1 \right )ft \right )

Equation 2.9

Of course, we can’t do an infinite summation in MATLAB, but we can observe how the graph of the function becomes increasingly square as we add more terms in the summation. To create the first four terms and plot the resulting sum, we can do

f1 = 'sin(2*pi*262*t) + sin(2*pi*262*3*t)/3 + sin(2*pi*262*5*t)/5 + sin(2*pi*262*7*t)/7';

fplot(f1, [0 0.01 -1 1]);

This gives the wave in Figure 2.38.

Figure 2.38  Creating a square wave by adding four sine functions

You can see that it is beginning to get square but has many ripples on the top. Adding four more terms gives further refinement to the square wave, as illustrated in Figure 2.39:

Figure 2.39 Creating a square wave by adding eight sine functions

Creating the wave in this brute force manner is tedious. We can make it easier by using MATLAB's sum function and its ability to do operations on entire arrays. For example, you can plot a 262 Hz square wave using 51 terms with the following MATLAB command:

fplot('sum(sin(2*pi*262*([1:2:101])*t)./([1:2:101]))',[0 0.005 -1 1])

The array notation [1:2:101] creates an array of 51 points spaced two units apart – in effect, including the odd harmonic frequencies in the summation and dividing by the odd number. The sum function adds up these frequency components. The function is graphed over the points 0 to 0.005 on the horizontal axis and –1 to 1 on the vertical axis. The ./ operation causes the division to be executed element by element across the two arrays.

The sawtooth wave is an infinite sum of all harmonic frequencies with diminishing amplitudes, as in the following equation:

Let f be a fundamental frequency. Then a sawtooth wave created from this fundamental frequency is defined by the infinite summation

 

\frac{2}{\pi }\sum_{n=1}^{\infty }\frac{\sin \left ( 2\pi n\, ft \right )}{n}

Equation 2.10

2/π is a scaling factor to ensure that the result of the summation is in the range of -1 to 1.

The sawtooth wave can be plotted by the following MATLAB command:

fplot('-sum((sin(2*pi*262*([1:100])*t)./([1:100])))',[0 0.005 -2 2])

The triangle wave is an infinite sum of odd-numbered harmonics that alternate in their signs, as follows:

Let f be a fundamental frequency. Then a triangle wave created from this fundamental frequency is defined by the infinite summation

 

\frac{8}{\pi ^{2}}\sum_{n=0}^{\infty }\left ( \frac{\sin \left ( 2\pi \left ( 4n+1 \right )ft \right )}{\left ( 4n+1 \right )^{2}}-\frac{\sin \left ( 2\pi \left ( 4n+3 \right )ft \right )}{\left ( 4n+3 \right )^{2}} \right )

Equation 2.11

8/π^2 is a scaling factor to ensure that the result of the summation is in the range of -1 to 1.

We leave the creation of the triangle wave as a MATLAB exercise.

If you actually want to hear one of these waves, you can generate the array of audio samples with


s = 1;

sr = 44100;

t = linspace(0, s, sr*s);

y = sawtooth(262*2*pi*t);

and then play the wave with

sound(y, sr);

It's informative to create and listen to square, sawtooth, and triangle waves of various amplitudes and frequencies. This gives you some insight into how these waves can be used in sound synthesizers to mimic the sounds of various instruments. We'll cover this in more detail in Chapter 6.

In this chapter, all our MATLAB examples are done by means of expressions that are evaluated directly from MATLAB's command line. Another way to approach the problems is to write programs in MATLAB's scripting language. We leave it to the reader to explore MATLAB script programming, and we'll have examples in later chapters.

2.3.4 Reading and Writing WAV Files in MATLAB

In the previous sections, we used sine functions to generate sound data and manipulated it in various ways.  This is useful for understanding basic concepts regarding sound.  However, in practice you have real-world sounds that have been captured and stored in digital form.  Let’s look now at how we can read audio files in MATLAB and perform operations on them.

We've borrowed a short WAV file from Adobe Audition's demo files, reducing it to mono rather than stereo.  MATLAB's wavread function imports WAV files, as follows:

y = wavread('HornsE04Mono.wav');

This reads an array of audio samples into y, assuming that the file is in the current folder of MATLAB.  (You can set this through the Current Folder window at the top of MATLAB.)  If you want to know the sampling rate and bit depth (the number of bits per sample) of the audio file, you can get this information with

[y, sr, b] = wavread('HornsE04Mono.wav');

sr now contains the sampling rate and b contains the bit depth.  The Workspace window shows you the values in these variables.

Figure 2.40 Workspace in MATLAB showing results of wavread function

You can play the sound with

sound(y, sr);

Once you've read in a WAV file and have it stored in an array, you can easily do mathematical operations on it.  For example, you can make it quieter by multiplying by a number less than 1, as in

y = y * 0.5;

You can also write out the new form of the sound file, as in

wavwrite(y, sr, 'HornsNew.wav');

2.3.5 Modeling Sound in Octave

Octave is a freeware, open-source version of MATLAB distributed by GNU.  It has many but not all of the functions of MATLAB.  There are versions of Octave for Linux, UNIX, Windows, and Mac OS X.

If you try to do the above exercise in Octave, most of the functions are the same.  The fplot function is the same in Octave as in MATLAB except that for colors, you must put a digit from 0 to 7 in single quotes rather than use the name of the color.  The linspace function is the same in Octave as in MATLAB.  To play the sound, you need to use the playsound function rather than wavplay.  You also can use a wavwrite function (which exists in both MATLAB and Octave) to write the audio data to an external file.  Then you can play the sound with your favorite media player.

There is no square or sawtooth function in Octave.  To create your own sawtooth, square, or triangle wave in Octave, you can use the Octave programs below.  You might want to consider why the mathematical shortcuts in these programs produced the desired waveforms.


function saw = sawtooth(freq, samplerate)

x = [0:samplerate];
wavelength = samplerate/freq;
saw = 2*mod(x, wavelength)/wavelength-1;

end

Program 2.1 Sawtooth wave in Octave


function sqwave = squarewave(freq, samplerate)

x = [0:samplerate];
wavelength = samplerate/freq;
saw = 2*mod(x, wavelength)/wavelength-1; %start with sawtooth wave
sawzeros = (saw == zeros(size(saw))); %eliminates division by zero in next step
sqwave = -abs(saw)./(saw+sawzeros); %./ for element-by-element division

end

Program 2.2 Square wave in Octave


function triwave = trianglewave(freq, samplerate)

x = [0:samplerate];
wavelength = samplerate/freq;
saw = 2*mod(x, wavelength)/wavelength-1; %start with sawtooth wave
triwave = 2*abs(saw)-1;

end

Program 2.3 Triangle wave in Octave

2.3.6 Transforming from One Domain to Another

In Section 2.2.3, we showed how sound can be represented graphically in two ways.  In the waveform view, time is on the horizontal axis and amplitude of the sound wave is on the vertical axis.  In the frequency analysis view, frequency is on the horizontal axis and the magnitude of the frequency component is on the vertical axis.  The waveform view represents sound in the time domain. The frequency analysis view represents sound in the frequency domain.  (See Figure 2.18 and Figure 2.19.)  Whether sound is represented in the time or the frequency domain, it's just a list of numbers.  The information is essentially the same – it's just that the way we look at it is different.

The one-dimensional Fourier transform is a function that maps the real numbers to the complex numbers, given by the equation below.  It can be used to transform audio data from the time domain to the frequency domain.

F\left ( n \right )=\int_{-\infty }^{\infty }f\left ( k \right )e^{-i2\pi nk}dk\: where\: i=\sqrt{-1}

Equation 2.12 Fourier transform (continuous)

Sometimes it's more convenient to represent sound data one way as opposed to another because it's easier to manipulate it in a certain domain.  For example, in the time domain we can easily change the amplitude of the sound by multiplying each amplitude by a number.  On the other hand, it may be easier to eliminate certain frequencies or change the relative magnitudes of frequencies if we have the data represented in the frequency domain.
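
As a sketch of this idea in MATLAB, assume y is an array of audio samples like the ones generated earlier; the band of components zeroed out here is chosen arbitrarily for illustration, and the fft and ifft functions are discussed in Section 2.3.9:

```matlab
quieter = 0.5 * y;            % time domain: scale every sample's amplitude

Y = fft(y);                   % move to the frequency domain
Y(1000:2000) = 0;             % crudely zero out one band of frequency components
Y(end-1998:end-998) = 0;      % ...and the mirrored components that pair with them
yFiltered = real(ifft(Y));    % move back to the time domain
```

This is not a production-quality filter (filtering is treated properly in Chapter 7); it simply illustrates that each domain makes certain manipulations more convenient.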

2.3.7 The Discrete Fourier Transform and its Inverse

To be applied to discrete audio data, the Fourier transform must be rendered in a discrete form. This is given in the equation for the discrete Fourier transform below.

F_{n}=\frac{1}{N}\left ( \sum_{k=0}^{N-1}f_{k}\cos \frac{2\pi nk}{N}-if_{k}\sin \frac{2\pi nk}{N} \right )=\frac{1}{N}\left ( \sum_{k=0}^{N-1}f_{k}e^{\frac{-i2\pi kn}{N}} \right )\; \mathrm{where}\; i=\sqrt{-1}

Equation 2.13 Discrete Fourier transform

Aside: 

The second form of the discrete Fourier transform given in Equation 2.13, \frac{1}{N}\left (\sum_{k=0}^{N-1}f_{k}e^{\frac{-i2\pi kn}{N}} \right ), uses the constant e.  It is derivable from the first by application of Euler’s identity, e^{i2\pi kn}=\cos \left ( 2\pi kn \right )+i\sin \left ( 2\pi kn \right ). To see the derivation, see (Burg 2008).

Notice that we’ve switched from the function notation used in Equation 2.12 ( F\left ( n \right ) and F\left ( k \right )) to array index notation in Equation 2.13 ( F_{n} and f_{k}) to emphasize that we are dealing with an array of discrete audio sample points in the time domain. Casting this equation as an algorithm (Algorithm 2.1) helps us to see how we could turn it into a computer program where the summation becomes a loop nested inside the outer for loop.

/*Input:

f, an array of digitized audio samples
N, the number of samples in the array
Note:  i=\sqrt{-1}

Output:

F, an array of complex numbers which give the frequency components of the sound given by f */

for (n = 0 to N - 1 )

F_{n}=\frac{1}{N}\left ( \sum_{k=0}^{N-1} f_{k}\cos \frac{2\pi nk}{N}-if_{k}\sin \frac{2\pi nk}{N}\right )

Algorithm 2.1 Discrete Fourier transform

Each time through the loop, the magnitude and phase of the nth frequency component, F_{n}, are computed. Each F_{n} is a complex number with a cosine and sine term, the sine term having the factor i in it.

We assume that you're familiar with complex numbers, but if not, a short introduction should be enough so that you can work with the Fourier algorithm.

A complex number takes the form a+bi, where i=\sqrt{-1}. Thus, f_{k}\cos \frac{2\pi nk}{N}-if_{k}\sin \frac{2\pi nk}{N} is a complex number. In this case, a corresponds to f_{k}\cos \frac{2\pi nk}{N} and b to -f_{k}\sin \frac{2\pi nk}{N}.  Handling the complex numbers in an implementation of the Fourier transform is not difficult. Although i is an imaginary number, \sqrt{-1}, and you might wonder how you’re supposed to do computation with it, you really don’t have to do anything with it at all except assume it’s there. The summation in the formula can be replaced by a loop that goes from 0 through N-1.  Each time through that loop, you add another term from the summation into an accumulating total. You can do this separately for the cosine and sine parts, setting aside i. Also, in object-oriented programming languages, you may have a Complex number class to do complex number calculations for you.

The result of the Fourier transform is a list of complex numbers F, each of the form a+bi, where the magnitude of the frequency component is equal to \sqrt{a^{2}+b^{2}}.
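
For readers who want to see Algorithm 2.1 in runnable form, here is a literal MATLAB transcription. The function name slowdft is ours, and the built-in fft function (Section 2.3.8) should be preferred in practice:

```matlab
function F = slowdft(f)
% Literal implementation of the discrete Fourier transform (Algorithm 2.1).
N = length(f);
F = zeros(1, N);
for n = 0:N-1
    for k = 0:N-1
        % accumulate the cosine (real) and sine (imaginary) terms
        F(n+1) = F(n+1) + f(k+1)*cos(2*pi*n*k/N) - 1i*f(k+1)*sin(2*pi*n*k/N);
    end
    F(n+1) = F(n+1)/N;    % the 1/N factor from Equation 2.13
end
end
```

Applying abs to the result then gives the magnitudes \sqrt{a^{2}+b^{2}} of the frequency components.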

The inverse Fourier transform transforms audio data from the frequency domain to the time domain. The inverse discrete Fourier transform is given in Algorithm 2.2.

/*Input:

F, an array of complex numbers representing audio data in the frequency domain, the elements represented by the coefficients of their real and imaginary parts, a and b respectively
N, the number of samples in the array

Note: i=\sqrt{-1}

Output: f, an array of audio samples in the time domain*/

for (k = 0 to N - 1)

f_{k}=\sum_{n=0}^{N-1}\left ( a_{n}\cos \frac{2\pi nk}{N}+ib_{n}\sin \frac{2\pi nk}{N} \right )

Algorithm 2.2 Inverse discrete Fourier transform
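
Algorithm 2.2 can be transcribed just as literally as a sketch. Again the function name is ours; the coefficients a and b are recovered from the complex array F with MATLAB's real and imag functions:

```matlab
function f = slowinversedft(F)
% Literal implementation of the inverse discrete Fourier transform (Algorithm 2.2).
N = length(F);
a = real(F);    % coefficients of the real parts
b = imag(F);    % coefficients of the imaginary parts
f = zeros(1, N);
for k = 0:N-1
    for n = 0:N-1
        f(k+1) = f(k+1) + a(n+1)*cos(2*pi*n*k/N) + 1i*b(n+1)*sin(2*pi*n*k/N);
    end
end
end
```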

2.3.8 The Fast Fourier Transform (FFT)

If you know how to program, it's not difficult to write your own discrete Fourier transform and its inverse through a literal implementation of the equations above.  However, the "literal" implementation of the transform is computationally expensive.  The equation in Algorithm 2.1 has to be applied N times, where N is the number of audio samples.  The equation itself has a summation that goes over N elements.  Thus, the discrete Fourier transform takes on the order of  N^{2} operations.

The fast Fourier transform (FFT) is a more efficient implementation of the Fourier transform that does on the order of  N\ast log_{2}N operations.  The algorithm is made more efficient by eliminating duplicate mathematical operations.  The FFT is the version of the Fourier transform that you'll often see in audio software and applications.  For example, Adobe Audition uses the FFT to generate its frequency analysis view, as shown in Figure 2.41.
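
You can get a feel for this difference by timing MATLAB's built-in fft against a literal implementation of Equation 2.13 on the same array. The absolute timings you see will vary from machine to machine:

```matlab
N = 2048;
f = rand(1, N);                 % some arbitrary "audio" data

tic; F = fft(f); toc            % fast: on the order of N*log2(N) operations

tic;                            % literal version: on the order of N^2 operations
Fslow = zeros(1, N);
k = 0:N-1;
for n = 0:N-1
    Fslow(n+1) = sum(f .* exp(-1i*2*pi*k*n/N)) / N;
end
toc
```

Note that MATLAB's fft omits the 1/N scaling factor that appears in Equation 2.13, so the two results differ by a factor of N, but the timing comparison still holds.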

Figure 2.41 Frequency analysis view (left) and waveform view (right) in Adobe Audition, showing audio data in the frequency domain and time domain, respectively

2.3.9 Applying the Fourier Transform in MATLAB

Generally when you work with digital audio, you don't have to implement your own FFT. Efficient implementations already exist in many programming language libraries. For example, MATLAB has FFT and inverse FFT functions, fft and ifft, respectively. We can use these to experiment and generate graphs of sound data in the frequency domain. First, let's use sine functions to generate arrays of numbers that simulate single-pitch sounds. We'll make three one-second long sounds using the standard sampling rate for CD quality audio, 44,100 samples per second. First, we generate an array of sr*s numbers across which we can evaluate sine functions, putting this array in the variable t.

sr = 44100; %sr is sampling rate
s = 1; %s is number of seconds
t = linspace(0, s, sr*s);

Now we use the array t as input to sine functions at three different frequencies and phases, creating the note A at three different octaves (110 Hz, 220 Hz, and 440 Hz).

x = cos(2*pi*110*t);
y = cos(2*pi*220*t + pi/3);
z = cos(2*pi*440*t + pi/6);

x, y, and z are arrays of numbers that can be used as audio samples. pi/3 and pi/6 represent phase shifts for the 220 Hz and 440 Hz waves, to make our phase response graph more interesting. The figures can be displayed with the following:

figure;
plot(t,x);
axis([0 0.05 -1.5 1.5]);
title('x');
figure;
plot(t,y);
axis([0 0.05 -1.5 1.5]);
title('y');
figure;
plot(t,z);
axis([0 0.05 -1.5 1.5]);
title('z');

We look at only the first 0.05 seconds of the waveforms in order to see their shape better. You can see the phase shifts in the figures below. The second and third waves don't start at 0 on the vertical axis.

Figure 2.42 110 Hz, no phase offset

Figure 2.43 220 Hz, π/3 phase offset

Figure 2.44 440 Hz, π/6 phase offset

Now we add the three sine waves to create a composite wave that has three frequency components at three different phases.

a = (x + y + z)/3;

Notice that we divide the summed sound waves by three so that the sound doesn’t clip. You can graph the three-component sound wave with the following:

figure;
plot(t, a);
axis([0 0.05 -1.5 1.5]);
title('a = x + y + z');
Figure 2.45 Time domain data for a 3-component waveform

This is a graph of the sound wave in the time domain. You could call it an impulse response graph, although when you’re looking at a sound file like this, you usually just think of it as “sound data in the time domain.” The term “impulse response” is used more commonly for time domain filters, as we’ll see in Chapter 7. You might want to play the sound to be sure you have what you think you have. The sound function requires that you tell it the number of samples it should play per second, which for our simulation is 44,100.

sound(a, sr);

When you play the sound file and listen carefully, you can hear that it has three tones. MATLAB's Fourier transform (fft) returns an array of double complex values (double-precision complex numbers) that represent the magnitudes and phases of the frequency components.

fftdata = fft(a);

In MATLAB's workspace window, fftdata values are labeled as type double, giving the impression that they are real numbers, but this is not the case. In fact, the Fourier transform produces complex numbers, which you can verify by trying to plot them in MATLAB. The Min and Max fields show the magnitudes of the complex numbers, which are computed with the abs function. For a complex number a+bi, the magnitude is \sqrt{a^{2}+b^{2}}.
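
The magnitude computation is easy to check by hand. A minimal sketch in Java (the class name ComplexMagnitude is just for illustration):

```java
public class ComplexMagnitude {
    // Magnitude of the complex number a + bi: sqrt(a^2 + b^2),
    // the same computation MATLAB's abs performs on complex values
    public static double magnitude(double a, double b) {
        return Math.sqrt(a * a + b * b);
    }

    public static void main(String[] args) {
        // 3 + 4i has magnitude 5
        System.out.println(magnitude(3.0, 4.0));
    }
}
```

Java's Math.hypot(a, b) computes the same value with better protection against overflow for very large components.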

Figure 2.46 Workspace in MATLAB showing values and types of variables currently in memory

To plot the results of the fft function such that the values represent the magnitudes of the frequency components, we first apply the abs function to fftdata.

fftmag = abs(fftdata);

Let's plot the frequency components to be sure we have what we think we have. For a sampling rate of sr on an array of sample values of size N, the Fourier transform returns the magnitudes of N/2 frequency components evenly spaced between 0 and sr/2 Hz. (We'll explain this completely in Chapter 5.)   Thus, we want to display frequencies between 0 and sr/2 on the horizontal axis, and only the first sr/2 values from the fftmag vector.

figure;
freqs = [0: (sr/2)-1];
plot(freqs, fftmag(1:sr/2));
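
The reason this axis labeling works out can be sketched in a few lines: bin k of an N-point FFT corresponds to frequency k*sr/N Hz, and since our signal here is exactly one second long (N = sr = 44,100), each bin is exactly 1 Hz wide. A small illustration in Java (the class name FftBins is hypothetical):

```java
public class FftBins {
    // Frequency in Hz represented by bin k of an N-point FFT
    // taken at sampling rate sr; bins 0 .. N/2 - 1 cover 0 to just under sr/2
    public static double binFrequency(int k, int n, double sr) {
        return k * sr / n;
    }

    public static void main(String[] args) {
        int n = 44100;        // one second of samples, as in the example
        double sr = 44100.0;  // sampling rate
        System.out.println(binFrequency(110, n, sr));  // bin 110 is exactly 110 Hz
        System.out.println(binFrequency(440, n, sr));  // bin 440 is exactly 440 Hz
    }
}
```

For a signal that is not exactly one second long, the bins are sr/N Hz wide rather than 1 Hz, and the horizontal axis has to be scaled accordingly.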

Aside:  If we zoom in more closely on each of these spikes at frequencies 110, 220, and 440 Hz, we see that they are not perfect vertical lines. The "imperfect" results of the FFT will be discussed later in the sections on FFT windows and windowing functions.

When you plot the full result, you'll see that all the frequency components are way over on the left side of the graph. Since we know our frequency components should be 110 Hz, 220 Hz, and 440 Hz, we might as well look at only the first, say, 600 frequency components so that we can see the results better. One way to zoom in on the frequency response graph is to use the zoom tool in the graph window, or you can reset the axis properties in the command window, as follows.

axis([0 600 0 8000]);

This yields the frequency response graph for our composite wave, which shows the three frequency components.

Figure 2.47 Frequency response graph for a 3-component wave

To get the phase response graph, we need to extract the phase information from the fftdata. This is done with the angle function. We leave that as an exercise. Let's try the Fourier transform on a more complex sound wave – a sound file that we read in.

y = wavread('HornsE04Mono.wav');

As before, you can get the Fourier transform with the fft function.

fftdata = fft(y);

You can then get the magnitudes of the frequency components and generate a frequency response graph from this.

fftmag = abs(fftdata);
N = length(y); %number of samples in the sound file
figure;
%Each FFT bin is sr/N Hz wide, so bin k corresponds to frequency k*sr/N
freqs = [0:floor(N/2)-1]*(sr/N);
plot(freqs, fftmag(1:floor(N/2)));
axis([0 sr/2 0 4500]);
title('frequency response for HornsE04Mono.wav');

Let's zoom in on frequencies up to 5000 Hz.

axis([0 5000 0 4500]);

The graph below is generated.

Figure 2.48 Frequency response for HornsE04Mono.wav

The inverse Fourier transform gives us back our original sound data in the time domain.

ynew = ifft(fftdata);

If you compare y with ynew, you'll see that the inverse Fourier transform has recaptured the original sound data.

2.3.10 Windowing the FFT

When we applied the Fourier transform in MATLAB in Section 2.3.9, we didn't specify a window size. Thus, we were applying the FFT to the entire piece of audio. If you listen to the WAV file HornsE04Mono.wav, a three-second clip, you'll first hear some low tubas and then some higher trumpets. Our graph of the FFT shows frequency components up to and beyond 5000 Hz, which reflects the sounds across the full three seconds. What if we do the FFT on just the first second (44,100 samples) of this WAV file, as follows? The resulting frequency components are shown in Figure 2.49.

y = wavread('HornsE04Mono.wav');
sr = 44100;
freqs = [0:(sr/2)-1];
ybegin = y(1:44100);
fftdata2 = fft(ybegin);
fftdata2 = fftdata2(1:22050);
plot(freqs, abs(fftdata2));
axis([0 5000 0 4500]);
Figure 2.49 Frequency components of first second of HornsE04Mono.wav

What we've done is focus on one short window of time in applying the FFT. An FFT window is a contiguous segment of audio samples on which the transform is applied. If you consider the nature of sound and music, you'll understand why applying the transform to relatively small windows makes sense. In many of our examples in this book, we generate segments of sound that consist of one or more frequency components that do not change over time, like a single pitch note or a single chord being played without change. These sounds are good for experimenting with the mathematics of digital audio, but they aren't representative of the music or sounds in our environment, in which the frequencies change constantly. The WAV file HornsE04Mono.wav serves as a good example. The clip is only three seconds long, but the first second is very different in frequencies (the pitches of tubas) from the last two seconds (the pitches of trumpets). When we do the FFT on the entire three seconds, we get a kind of "blurred" view of the frequency components, because the music actually changes over the three second period. It makes more sense to look at small segments of time. This is the purpose of the FFT window.

Figure 2.50 shows an example of how FFT window sizes are used in audio processing programs. Notice the drop down menu, which gives you a choice of FFT sizes ranging from 32 to 65536 samples. The FFT window size is typically a power of 2. If your sampling rate is 44,100 samples per second, then a window size of 32 samples is about 0.0007 s, and a window size of 65536 is about 1.486 s.

There's a tradeoff in the choice of window size. A small window focuses on the frequencies present in the sound over a short period of time. However, as mentioned earlier, the number of frequency components yielded by an FFT of size N is N/2. Thus, for a window size of, say, 128, only 64 frequency bands are output, spread over the frequencies from 0 Hz to sr/2 Hz where sr is the sampling rate. (See Chapter 5.) For a window size of 65536, 32768 frequency bands are output, which seems like a good thing, except that with the large window size, the FFT is not isolating a short moment of time. A window size of around 2048 usually gives good results. If you set the size to 2048 and play the piece of music loaded into Audition, you'll see the frequencies in the frequency analysis view bounce up and down, reflecting the changing frequencies in the music as time passes.
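
These numbers are straightforward to verify. The sketch below (the class name WindowTradeoff is hypothetical) computes, for several window sizes, the time span the window covers and the number of frequency bands the FFT yields:

```java
public class WindowTradeoff {
    // Duration in seconds covered by an FFT window of w samples
    // at sampling rate sr
    public static double duration(int w, double sr) {
        return w / sr;
    }

    // Number of frequency bands an FFT of size w yields (w/2);
    // each band is sr/w Hz wide
    public static int bands(int w) {
        return w / 2;
    }

    public static void main(String[] args) {
        double sr = 44100.0;
        for (int w : new int[] {32, 2048, 65536}) {
            System.out.printf("w=%5d  %.4f s  %5d bands  %.2f Hz/band%n",
                    w, duration(w, sr), bands(w), sr / w);
        }
    }
}
```

Running this reproduces the figures quoted above: a 32-sample window lasts about 0.0007 s, and a 65536-sample window lasts about 1.486 s and yields 32768 bands.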

Figure 2.50 Choice of FFT window size in Adobe Audition

2.3.11 Windowing Functions to Eliminate Spectral Leakage

In addition to choosing the FFT window size, audio processing programs often let you choose from a number of windowing functions. The purpose of an FFT windowing function is to smooth out the discontinuities that result from applying the FFT to segments (i.e., windows) of audio data. A simplifying assumption of the FFT is that each windowed segment of audio data contains an integral number of cycles of a waveform, this cycle repeating throughout the audio. This, of course, is not generally the case. If it were the case – that is, if the window ended exactly where a cycle ended – then the end of the window would be at exactly the same amplitude as the beginning. The beginning and end would "match up." The actual discontinuity between the end of a window and its beginning is interpreted by the FFT as a jump from one amplitude level to another, as shown in Figure 2.51. (In this figure, we've cut and pasted a portion from the beginning of the window to its end to show that the ends don't match up.)
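
You can check this mismatch numerically. The sketch below (the class name WindowMismatch is hypothetical) compares the amplitude at the start of a window with the amplitude the wave would have just past the window's end; when the window holds a whole number of cycles the two agree, and otherwise they don't:

```java
public class WindowMismatch {
    // Amplitude gap between the start of a window and the sample that
    // would follow its end, for a sine of frequency f Hz sampled at
    // sr Hz over a window of w samples
    public static double gap(double f, double sr, int w) {
        double first = Math.sin(0);
        double afterLast = Math.sin(2 * Math.PI * f * w / sr);
        return Math.abs(afterLast - first);
    }

    public static void main(String[] args) {
        double sr = 44100.0;
        int w = 2048;
        // 441 Hz gives 441 * 2048 / 44100 = 20.48 cycles per window,
        // so the window ends mid-cycle and the ends don't match up
        System.out.println(gap(441.0, sr, w));
        // 2 * sr / w Hz gives exactly 2 cycles per window,
        // so the window ends where it began
        System.out.println(gap(2 * sr / w, sr, w));
    }
}
```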

Figure 2.51 Discontinuity between the end of a window and its beginning

In the output of the FFT, the discontinuity between the ends and the beginnings of the windows manifests itself as frequency components that don't really exist in the audio – called spurious frequencies, or spectral leakage. You can see the spectral leakage in Figure 2.41. Although the audio signal actually contains only one frequency at 880 Hz, the frequency analysis view indicates that there is a small amount of other frequencies across the audible spectrum.

In order to smooth over this discontinuity and thereby reduce the amount of spectral leakage, the windowing functions effectively taper the ends of the segments to 0 so that they connect from beginning to end. The drop-down menu to the left of the FFT size menu in Audition is where you choose the windowing function. In Figure 2.50, the Hanning function is chosen. Four commonly-used windowing functions are given in the table below.
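
As a concrete example, the Hanning (Hann) function selected in Figure 2.50 tapers each window from 0 at the ends to 1 at the center. A sketch in Java, assuming the common formula w(n) = 0.5*(1 - cos(2πn/(N-1))) (the class name HannWindow is just for illustration):

```java
public class HannWindow {
    // Value of the Hann window at sample n of a window of the given size;
    // 0 at both ends, rising to 1 at the center
    public static double hann(int n, int size) {
        return 0.5 * (1 - Math.cos(2 * Math.PI * n / (size - 1)));
    }

    public static void main(String[] args) {
        int size = 2048;
        System.out.println(hann(0, size));        // 0.0 at the start
        System.out.println(hann(size / 2, size)); // approximately 1 at the center
        System.out.println(hann(size - 1, size)); // back to (nearly) 0 at the end
    }
}
```

The Hamming function used in the MATLAB example below has the same shape but slightly different coefficients (0.54 and 0.46), so its ends taper close to, but not exactly to, zero.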

Figure 2.52 Windowing functions

Windowing functions are easy to apply. The segment of audio data being transformed is simply multiplied by the windowing function before the transform is applied. In MATLAB, you can accomplish this with vector multiplication, as shown in the commands below.

y = wavread('HornsE04Mono.wav');
sr = 44100; %sampling rate
w = 2048; %window size
T = w/sr; %period
% t is an array of times at which the hamming function is evaluated
t = linspace(0, 1, 44100);
twindow = t(1:2048); %first 2048 elements of t
% Create the values for the hamming function, stored in vector called hamming
hamming = 0.54 - 0.46 * cos((2 * pi * twindow)/T);
plot(hamming);
title('Hamming');

The Hamming function is shown in Figure 2.53.

Figure 2.53 Hamming windowing function

yshort = y(1:2048); %first 2048 samples from sound file
%Multiply the audio values in the window by the Hamming function values,
% using element-by-element multiplication with .*.
% First transpose hamming from a row vector to a column vector so that
% its dimensions match those of yshort.
ywindowed = hamming' .* yshort;
figure;
plot(yshort);
title('First 2048 samples of audio data');
figure;
plot(ywindowed);
title('First 2048 samples of audio data, tapered by windowing function');

Before the Hamming function is applied, the first 2048 samples of audio data look like this:

Figure 2.54 Audio data

After the Hamming function is applied, the audio data look like this:

Figure 2.55 Audio data after application of Hamming windowing function

Notice that the ends of the segment are tapered toward 0.

Figure 2.56 compares the FFT results with no windowing function vs. with the Hamming windowing function applied. The windowing function eliminates some of the high frequency components that are caused by spectral leakage.

figure
plot(abs(fft(yshort)));
axis([0 300 0 60]);
hold on;
plot(abs(fft(ywindowed)),'r');
Figure 2.56 Comparing FFT results with and without windowing function

2.3.12 Modeling Sound in C++ under Linux

If you want to work at an even lower level of abstraction, a good environment for experimentation is the Linux operating system using “from scratch” programs written in C++. In our first example C++ sound program, we show you how to create sound waves of a given frequency, add frequency components to get a complex wave, and play the sounds via the sound device. This program is another implementation of the exercises in Max and MATLAB in Sections 2.3.1 and 2.3.3.   The C++ program is given in Program 2.4.  

Aside:  In this example program, 8 bits are used to store each audio sample.  That is, the bit depth is 8. The sound library also allows a bit depth of 16.  The concept of bit depth will be explained in detail in Chapter 5.


//This program uses the OSS library.
#include <sys/ioctl.h> //for ioctl()
#include <math.h> //sin(), floor(), and pow()
#include <stdio.h> //perror
#include <fcntl.h> //open, O_WRONLY
#include <linux/soundcard.h> //SOUND_PCM*
#include <iostream>
#include <unistd.h>
using namespace std;

#define TYPE char
#define LENGTH 1 //number of seconds per frequency
#define RATE 44100 //sampling rate
#define SIZE sizeof(TYPE) //size of sample, in bytes
#define CHANNELS 1 //number of audio channels
#define PI 3.14159
#define NUM_FREQS 3 //total number of frequencies
#define BUFFSIZE (int) (NUM_FREQS*LENGTH*RATE*SIZE*CHANNELS) //bytes sent to audio device
#define ARRAYSIZE (int) (NUM_FREQS*LENGTH*RATE*CHANNELS) //total number of samples
#define SAMPLE_MAX (pow(2,SIZE*8 - 1) - 1) 

void writeToSoundDevice(TYPE buf[], int deviceID) {
	int status;
	status = write(deviceID, buf, BUFFSIZE);
	if (status != BUFFSIZE)
		perror("Wrote wrong number of bytes\n");
	status = ioctl(deviceID, SNDCTL_DSP_SYNC, 0);
	if (status == -1)
		perror("SNDCTL_DSP_SYNC failed\n");
}

int main() {
	int deviceID, arg, status, f, t, a, i;
	TYPE buf[ARRAYSIZE];
	deviceID = open("/dev/dsp", O_WRONLY, 0);
	if (deviceID < 0)
		perror("Opening /dev/dsp failed\n");
	arg = SIZE * 8;
	status = ioctl(deviceID, SNDCTL_DSP_SETFMT, &arg);
	if (status == -1)
		perror("Unable to set sample size\n");
	arg = CHANNELS;
	status = ioctl(deviceID, SNDCTL_DSP_CHANNELS, &arg);
	if (status == -1)
		perror("Unable to set number of channels\n");
	arg = RATE;
	status = ioctl(deviceID, SNDCTL_DSP_SPEED, &arg);
	if (status == -1)
		perror("Unable to set sampling rate\n");
	a = SAMPLE_MAX;
	for (i = 0; i < NUM_FREQS; ++i) {
		switch (i) {
			case 0:
				f = 262;
				break;
			case 1:
				f = 330;
				break;
			case 2:
				f = 392;
				break;
		}
		for (t = 0; t < ARRAYSIZE/NUM_FREQS; ++t) {
			buf[t + ((ARRAYSIZE / NUM_FREQS) * i)] = floor(a * sin(2*PI*f*t/RATE));
		}
	}
	writeToSoundDevice(buf, deviceID);
}

Program 2.4 Adding sine waves and sending sound to sound device in C++

To be able to compile and run a program such as this, you need to install a sound library in your Linux environment. At the time of the writing of this chapter, the two standard low-level sound libraries for Linux are the OSS (Open Sound System) and ALSA (Advanced Linux Sound Architecture). A sound library provides a software interface that allows your program to access the sound devices, sending and receiving sound data. ALSA is the newer of the two libraries and is preferred by most users. At a slightly higher level of abstraction are PulseAudio and Jack, applications which direct multiple sound streams from their inputs to their outputs. Ultimately, PulseAudio and Jack use lower level libraries to communicate directly with the sound cards.

Program 2.4 uses the OSS library. In a program such as this, the sound device is opened, read from, and written to in a way similar to how files are handled. The sample program shows how you open /dev/dsp, an interface to the sound card device, to ask this device to receive audio data. The variable deviceID serves as a handle to the sound device and is passed to the ioctl calls that set the size of each sample, the number of channels, and the data rate. We've set a size of eight bits (one byte) per audio sample, one channel, and a data rate of 44,100 samples per second. The significance of these numbers will be clearer when we talk about digitization in Chapter 5. The buffer size is the product of the sample size, data rate, number of channels, and total length of the recording (in this case, three seconds – one second per frequency), yielding a buffer of 44,100 * 3 bytes.

The sound wave is created by taking the sine of the appropriate frequency (262 Hz, for example) at 44,100 evenly-spaced intervals for one second of audio data. The value returned from the sine function is between -1 and 1. However, the sound card expects a value that is stored in one byte (i.e., 8 bits), ranging from -128 to 127. To put the value into this range, we multiply by 127 and, with the floor function, round down.
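
The scaling step can be pulled out and examined on its own. The sketch below (the class name EightBitSample is hypothetical) converts individual samples of a sine wave the same way the inner loop of Program 2.4 does, assuming the maximum 8-bit sample value of 127:

```java
public class EightBitSample {
    // The t-th sample of an f Hz sine wave at sampling rate sr,
    // scaled to a signed 8-bit value: multiply the [-1, 1] sine
    // value by 127 and round down with floor, as in Program 2.4
    public static byte sample(int t, double f, double sr) {
        return (byte) Math.floor(127 * Math.sin(2 * Math.PI * f * t / sr));
    }

    public static void main(String[] args) {
        // First few samples of a 262 Hz tone at 44,100 samples per second
        for (int t = 0; t < 5; t++) {
            System.out.println(sample(t, 262.0, 44100.0));
        }
    }
}
```

Every value produced this way fits in one signed byte, which is why the C++ program can store the results directly in its char buffer.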

The three frequencies are created and concatenated into one array of audio values. The write function has the device ID, the name of the buffer for storing the sound data, and the size of the buffer as its parameters. This function sends the sound data to the sound card to be played. The three frequencies together produce a harmonious chord in the key of C. In Chapter 3, we’ll explore what makes these frequencies harmonious.

The program requires some header files for definitions of constants like O_WRONLY (restricting access to the sound device to writing) and SOUND_PCM_WRITE_BITS. After you install the sound libraries, you'll need to locate the appropriate header files and adjust the #include statements accordingly. You'll also need to check the way your compiler handles the math and sound libraries. You may need to include the option -lm on the compile line to include the math library, or the -lasound option for the ALSA library.

This program introduces you to the notion that sound must be converted to a numeric format that is communicable to a computer. The solution to the programming assignment given as a learning supplement has an explanation of the variables and constants in this program. A full understanding of the program requires that you know something about sampling and quantization, the two main steps in analog-to-digital conversion, a topic that we’ll examine in depth in Chapter 5.

2.3.13 Modeling Sound in Java

The Java environment allows the programmer to take advantage of Java libraries for sound and to benefit from object-oriented programming features like encapsulation, inheritance, and interfaces. In this chapter, we are going to use the package javax.sound.sampled. This package provides functionality to capture, mix, and play sounds with classes such as SourceDataLine, AudioFormat, AudioSystem, and LineUnavailableException.

Program 2.5 uses a SourceDataLine object. This is the object to which we write audio data. Before doing that, we must set up the data line object with a specified audio format object. (See line 30.) The AudioFormat class specifies a certain arrangement of data in the sound stream, including the sampling rate, sample size in bits, and number of channels. A SourceDataLine object is created with the specified format, which in the example is 44,100 samples per second, eight bits per sample, and one channel for mono. With this setting, the line gets the required system resources and becomes operational. After the SourceDataLine is opened, data is written to the mixer using a buffer that contains data generated by a sine function. Notice that we don't directly access the sound device, because we are using a SourceDataLine object to deliver data bytes to the mixer. The mixer mixes the samples and finally delivers them to an audio output device on a sound card.

 import javax.sound.sampled.AudioFormat;
 import javax.sound.sampled.AudioSystem;
 import javax.sound.sampled.SourceDataLine;
 import javax.sound.sampled.LineUnavailableException;

 public class ExampleTone1{

   public static void main(String[] args){

     try {
         ExampleTone1.createTone(262, 100);
     } catch (LineUnavailableException lue) {
         System.out.println(lue);
     }
   }

   /** parameters are frequency in Hertz and volume
   **/
   public static void createTone(int Hertz, int volume)
     throws LineUnavailableException {
     /** Exception is thrown when line cannot be opened */

     float rate = 44100;
     byte[] buf;
     AudioFormat audioF;

     buf = new byte[1];
     audioF = new AudioFormat(rate,8,1,true,false);
     //sampleRate, sampleSizeInBits,channels,signed,bigEndian

     SourceDataLine sourceDL = AudioSystem.getSourceDataLine(audioF);
     sourceDL.open(audioF);
     sourceDL.start();

     for(int i=0; i<rate; i++){
       double angle = (i/rate)*Hertz*2.0*Math.PI;
       buf[0]=(byte)(Math.sin(angle)*volume);
       sourceDL.write(buf,0,1);
     }

     sourceDL.drain();
     sourceDL.stop();
     sourceDL.close();
   }
 }

Program 2.5 A simple sound generating program in Java

This program illustrates a simple way of generating a sound using a sine wave and the javax.sound.sampled library. If we change the values of the createTone procedure parameters, which are 262 Hz for frequency and 100 for volume, we can produce a different tone. The second parameter, volume, is used to change the amplitude of the sound. Notice that the sine function result is multiplied by the volume parameter in line 40.

Although the purpose of this section of the book is not to demonstrate how Java graphics classes are used, it may be helpful to use some basic plot features in Java to generate sine wave drawings. An advantage of Java is that it facilitates your control of windows and containers. We inherit this functionality from the JPanel class, which is a container where we are going to paint the generated sine wave. Program 2.6 is a variation of Program 2.5. It produces a Java window by means of the procedure paintComponent. The sine wave generated again has a frequency of 262 Hz and a volume of 100.

 import javax.sound.sampled.AudioFormat;
 import javax.sound.sampled.AudioSystem;
 import javax.sound.sampled.SourceDataLine;
 import javax.sound.sampled.LineUnavailableException;

 import java.awt.*;
 import java.awt.geom.*;
 import javax.swing.*;

 public class ExampleTone2 extends JPanel{

   static double[] sines;
   static int vol;

   public static void main(String[] args){

     try {
         ExampleTone2.createTone(262, 100);
     } catch (LineUnavailableException lue) {
         System.out.println(lue);
     }

     //Frame object for drawing
     JFrame frame = new JFrame();
     frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
     frame.add(new ExampleTone2());
     frame.setSize(800,300);
     frame.setLocation(200,200);
     frame.setVisible(true);
   }

   public static void createTone(int Hertz, int volume)
     throws LineUnavailableException {

     float rate = 44100;
     byte[] buf;
     buf = new byte[1];
     sines = new double[(int)rate];
     vol=volume;

     AudioFormat audioF;
     audioF = new AudioFormat(rate,8,1,true,false);

     SourceDataLine sourceDL = AudioSystem.getSourceDataLine(audioF);
     sourceDL.open(audioF);
     sourceDL.start();

     for(int i=0; i<rate; i++){
       double angle = (i/rate)*Hertz*2.0*Math.PI;
       buf[0]=(byte)(Math.sin(angle)*vol);
       sourceDL.write(buf,0,1);

       sines[i]=(double)(Math.sin(angle)*vol);
     }

     sourceDL.drain();
     sourceDL.stop();
     sourceDL.close();
   }

   protected void paintComponent(Graphics g) {
         super.paintComponent(g);
         Graphics2D g2 = (Graphics2D)g;
         g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                             RenderingHints.VALUE_ANTIALIAS_ON);

         int pointsToDraw=4000;
         double max=sines[0];
         for(int i=1;i<pointsToDraw;i++)  if (max<sines[i]) max=sines[i];
         int border=10;
         int w = getWidth();
         int h = (2*border+(int)max);

         double xInc = 0.5;

         //Draw x and y axes
         g2.draw(new Line2D.Double(border, border, border, 2*(max+border)));
         g2.draw(new Line2D.Double(border, (h-sines[0]), w-border, (h-sines[0])));

         g2.setPaint(Color.red);

         for(int i = 0; i < pointsToDraw; i++) {
             double x = border + i*xInc;
             double y = (h-sines[i]);
             g2.fill(new Ellipse2D.Double(x-2, y-2, 2, 2));
         }
    }
 }

Program 2.6 Visualizing the sound waves in a Java program

If we increase the value of the frequency in line 18 to 400 Hz, we can notice how the number of cycles increases, as shown in Figure 2.57. On the other hand, by increasing the volume, we obtain a higher amplitude for each frequency.

Figure 2.57 Sound waves generated in a Java program

We can also create square, triangle, and sawtooth waves in Java by modifying the for loop in lines 49 to 52. For example, to create a square wave, we may change the for loop to something like the following:

 for(int i=0; i<rate; i++){
   double angle1 = i/rate*Hertz*1.0*2.0*Math.PI;
   double angle2 = i/rate*Hertz*3.0*2.0*Math.PI;
   double angle3 = i/rate*Hertz*5.0*2.0*Math.PI;
   double angle4 = i/rate*Hertz*7.0*2.0*Math.PI;

   buf[0]=(byte)(Math.sin(angle1)*vol+
	Math.sin(angle2)*vol/3+Math.sin(angle3)*vol/5+
	Math.sin(angle4)*vol/7);
   sourceDL.write(buf,0,1);
   sines[i]=(double)(Math.sin(angle1)*vol+
	Math.sin(angle2)*vol/3+Math.sin(angle3)*vol/5+
	Math.sin(angle4)*vol/7);
 }

This for loop produces the waveform shown in Figure 2.58. The graph doesn't look like a perfect square wave, but the more harmonic frequencies we add, the closer we get to a square wave. (Note that you can create these waveforms more exactly by adapting the MATLAB programs above to Java.)
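
The convergence claim can be tested numerically. The sketch below (the class name SquareWaveApprox is hypothetical) sums odd harmonics with amplitudes 1, 1/3, 1/5, ... as in the loop above, scaled by 4/π so that the ideal square wave has amplitude 1, and measures how far the partial sum is from that ideal away from the jump points; more harmonics give a smaller error:

```java
public class SquareWaveApprox {
    // Sum of the first k odd harmonics of a unit square wave at phase x
    // (in radians): (4/pi) * (sin x + sin(3x)/3 + sin(5x)/5 + ...)
    public static double partialSum(double x, int k) {
        double s = 0;
        for (int i = 0; i < k; i++) {
            int h = 2 * i + 1;          // odd harmonic number
            s += Math.sin(h * x) / h;
        }
        return 4 / Math.PI * s;
    }

    // Worst-case error against the ideal square wave (which equals 1
    // on this interval), sampled away from the discontinuities
    public static double maxError(int k) {
        double worst = 0;
        for (double x = 0.3; x < Math.PI - 0.3; x += 0.01) {
            worst = Math.max(worst, Math.abs(partialSum(x, k) - 1.0));
        }
        return worst;
    }

    public static void main(String[] args) {
        System.out.println(maxError(4));    // four harmonics, as in the loop above
        System.out.println(maxError(50));   // fifty harmonics -- much closer
    }
}
```

Near the jumps themselves the overshoot never disappears entirely no matter how many harmonics are added (the Gibbs phenomenon), which is why the measurement above deliberately stays away from the discontinuities.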

Figure 2.58 Creating a square wave in Java
