The waveform of a sound is a graph of the way the pressure changes as the wavefronts pass. This is usually a very convoluted pattern, and the actual frequency of the wave is not apparent from looking at the waveform. As a matter of fact, the waveform does not usually repeat exactly from one cycle to another.
The waveform produced by simple harmonic motion is the SINE WAVE. We can draw a sine wave by plotting the function v = A sin(2[pi]ft). To do this we divide up our graph paper horizontally into equal chunks to represent a time scale, and for each time t we want to plot, we multiply t by 2[pi]f (f = frequency) and look up the sine of the result. That sine value is what gets used for the vertical part of the graph.
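The plotting procedure just described can be tried directly in a few lines of Python (a sketch of our own, added for illustration; the function name is not from the original text):

```python
import math

def sine_samples(freq, duration, points_per_second):
    """Tabulate sin(2*pi*f*t) at evenly spaced times, as described above."""
    samples = []
    for i in range(int(duration * points_per_second)):
        t = i / points_per_second
        samples.append(math.sin(2 * math.pi * freq * t))
    return samples

# One cycle of a 1 hz sine, sampled at 8 points per second:
print([round(v, 3) for v in sine_samples(1, 1, 8)])
# → [0.0, 0.707, 1.0, 0.707, 0.0, -0.707, -1.0, -0.707]
```

Each value is the height of the curve at one of the equal time chunks along the horizontal axis.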
There is also a function called a cosine wave, and it looks just like the sine wave. The difference is that the cosine of an angle is equal to the sine of an angle 90 degrees bigger. When we have two waveforms which have the same shape and frequency but are offset in time, we say they are out of phase by the amount of angle you have to add to the argument of the first to move them together. In other words, the wave defined by sin(2[pi]ft) is out of phase with the wave defined as sin(2[pi]ft+p) by the angle p.
The second simplest waveform is probably the combination of two sine waves. Such a combination of waves is interpreted by the ear as a single waveform, because the pressure at any spot is merely the sum of all of the waves passing that spot. Here are a few rules about the addition of two sine waves:
- If both have the same frequency and phase, the result is a sine wave of amplitude equal to the sum of the two amplitudes.
- If both have the same frequency and amplitude but are 180 degrees out of phase, the result is zero. Any other combination of amplitudes produces a sine wave of amplitude equal to the difference of the two original amplitudes.
- If both are the same frequency and amplitude but are out of phase by a value other than 180 degrees, you get a sine wave of amplitude less than the sum of the two and of intermediate phase.
- If the two sine waves are not the same frequency, the result is complex. In fact, the waveform will not be the same for each cycle unless the frequency of one sine wave is an exact multiple of the frequency of the other.
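These rules are easy to verify numerically. The following Python sketch (our own illustration) samples the sum of two equal-frequency sines at a given phase offset and reports the peak amplitude of the result:

```python
import math

def add_sines(amp1, amp2, phase_deg, n=1000):
    """Sample the sum of two equal-frequency sines offset by phase_deg
    over one cycle and report the resulting peak amplitude."""
    peak = 0.0
    for i in range(n):
        t = i / n
        v = (amp1 * math.sin(2 * math.pi * t) +
             amp2 * math.sin(2 * math.pi * t + math.radians(phase_deg)))
        peak = max(peak, abs(v))
    return peak

print(round(add_sines(1, 1, 0), 2))    # same phase: amplitudes add → 2.0
print(round(add_sines(1, 1, 180), 2))  # 180 degrees out: cancellation → 0.0
print(round(add_sines(1, 1, 90), 2))   # intermediate phase → 1.41
```

The 90-degree case gives an amplitude of about 1.41 (the square root of two), less than the sum of the two amplitudes, just as the third rule says.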
If you explore combinations of more than two sine waves you find that the waveforms become very complex indeed, and depend on the amplitude, frequency, and phase of each component. Every stable waveform you discover will be made up of sine waves with frequencies that are some whole number multiple of the frequency of the composite wave.
The reverse process has been shown mathematically to be true: any stable waveform can be analyzed as a combination of sine waves of various amplitude, frequency, and phase. The method of analysis was developed by Fourier in 1807 and is named after him. The actual procedure for Fourier analysis is too complex to get into here, but the result (with stable waveforms) is an expression of the form:

v = a sin(wt) + b sin(2wt) + c sin(3wt) + A cos(wt) + B cos(2wt)

and so forth. The omega (looks like a w) represents the frequency in radians per second, also known as angular frequency. The inclusion of cosine waves as well as sine waves takes care of phase, and the letters represent the amplitude of each component. This result is easily translated into a bar graph with one bar per component. Since the ear is apparently not sensitive to phase, we can simplify the graph into a sine waves only form. Such a graph is called a SPECTRAL PLOT.
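For a stored cycle of a waveform, the analysis can be sketched in the discrete case. The following Python fragment (our own illustration, using a direct Fourier sum rather than any particular fast algorithm) recovers the amplitudes of the harmonics:

```python
import math

def harmonic_amplitudes(samples, n_harmonics):
    """Estimate the amplitude of each harmonic of a single stored cycle
    by correlating it with sine and cosine waves (a direct Fourier sum)."""
    N = len(samples)
    amps = []
    for h in range(1, n_harmonics + 1):
        a = sum(s * math.sin(2 * math.pi * h * i / N) for i, s in enumerate(samples))
        b = sum(s * math.cos(2 * math.pi * h * i / N) for i, s in enumerate(samples))
        # combining the sine and cosine parts discards phase,
        # just as a spectral plot does
        amps.append(2 / N * math.hypot(a, b))
    return amps

# One cycle containing a fundamental plus a half-strength third harmonic:
N = 64
wave = [math.sin(2 * math.pi * i / N) + 0.5 * math.sin(6 * math.pi * i / N)
        for i in range(N)]
print([round(a, 2) for a in harmonic_amplitudes(wave, 4)])  # → [1.0, 0.0, 0.5, 0.0]
```

The resulting list is exactly the information a spectral plot shows: one amplitude per component, phase ignored.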
The lowest component of the waveform is known as the FUNDAMENTAL, and the others are HARMONICS, with a number corresponding to the multiple of the fundamental frequency. The second harmonic is twice the fundamental frequency, the third harmonic is three times the fundamental frequency, and so forth. It is important to recognize that the harmonic number is not the same as the equivalent musical interval name, although the early harmonics do approximate some of the intervals. The most important relationship is that the harmonics numbered by powers of two are various octaves.
Non-repeating waveforms may be disassembled by Fourier means also, but the result is a complex integral that is not useful as a visual aid. However, if we disregard phase, these waveforms may also be represented on a spectral plot, as long as we remember that the components are not necessarily whole number multiples of the fundamental frequency and therefore do not qualify as harmonics. We should not say that a non-harmonic waveform is not pitched, but it is true that the worse the spectral plot fits the harmonic model, the more difficult it is to perceive pitch in a sound.
There are sounds whose waveforms are so complex that the Fourier process gives a statistical answer. (These waveforms are the sounds commonly called noise.) You can express the likelihood of finding a particular frequency as a probability over a large enough time, but you cannot assign any component a constant amplitude. To describe such sounds on a spectral plot, we plot the probability curve. A very narrow band of noise will sound like a pitched tone, but as the curve widens we lose the impression of pitch, aware only of a vague highness or lowness of the sound. Noise that spreads across the entire range of hearing is called WHITE NOISE, because it has equal probability of all frequencies being represented. Such noise sounds high pitched because of the logarithmic response of the ear to frequency. (Our ears consider the octave 100 hz to 200 hz to be equal to the octave 1000 hz to 2000 hz, even though the higher one has a much wider frequency spread, and therefore more power.) Noise with emphasis added to the low end to compensate for this is called PINK NOISE.
A sound event is only partially described by its spectral plot. For a complete description, we need to graph the way the sound changes over time. There are two ways in which such graphs are presented. In the Sonogram, the horizontal axis is time, the vertical axis is frequency, and the amplitude is represented by the darkness of the mark. There is a machine that produces this kind of display; the Sonogram is a two dimensional image. The Spectrograph is a three dimensional graph. The three dimensional graph gives a clearer sense of how the amplitudes of the various components of a sound change. This really shows amplitude over time for each partial of the sound. In this case, frequency is represented by apparent depth into the screen. Most analysis programs allow you to display high frequency behind low, as this one does, or low behind high. This ability to swap parameters among the three axes allows you to pick a view with the least information hidden.
Mathematics of Electronic Music
One of the difficult aspects of the study of electronic music is the description of the sounds used. With traditional music, there is a common understanding of what the instruments sound like, so a simple notation such as 'violin' or 'steel guitar' will convey enough of an aural image for performance. In electronic music, the sounds are usually unfamiliar, and a composition may involve some very delicate variations in those sounds. In order to discuss and study such sounds with the required accuracy, we must use the tools of mathematics.
In dealing with sound, we are constantly concerned with frequency, the number of times some event occurs within a second. In old literature, you will find this parameter measured in c.p.s., standing for cycles per second. In modern usage, the unit of frequency is the Hertz (abbr. hz), which is officially defined as the reciprocal of one second. This makes sense if you remember that the period of a cyclical process, which is a time measured in seconds, is equal to the reciprocal of the frequency. (P=1/f) Since we often discuss frequencies in the thousands of Hertz, the unit kiloHertz (1000hz=1khz) is very useful.
Many concepts in electronic music involve logarithmic or exponential relationships. A relationship between two parameters is linear if a constant ratio exists between the two. In other words, if one is increased, the other is increased a proportional amount, or in math expression:

Y = kX

where k is a number that does not change (a constant).
A relationship between two parameters is exponential if the expression relating them has the form Y = k^X. In this situation, a small change in X will cause a small change in Y, but a moderate change in X will cause a large change in Y. The two kinds of relationship can be shown graphically. One fact to keep in mind whenever you are confronted with exponentials: X^0 = 1 no matter what X is.
A logarithm is a method of representing large numbers. It is the inverse of an exponential relationship. If Y = 10^X, X is the logarithm (base 10) of Y. This system has several advantages; it keeps numbers compact (the log of 1,000,000 is 6), and there are a number of mathematical tricks that can be performed with logarithms. For instance, the sum of the logarithms of two numbers is the logarithm of the product of the two numbers; if you know your logs (or have a list of them handy), you can multiply large numbers with a mechanical adder. (This is what a slide rule does.) Two times the logarithm of a number is the log of the square of that number, and so forth. We find logarithmic and exponential relationships in many places in music; for instance, the octave's numerical relationship may be expressed as Freq = F x 2^n, where F is the frequency of the original pitch and n is the number of octaves you want to go up.
The strength of sounds, and related electronic measurements, are often expressed in decibels (abbr. dB). The dB is not an absolute measurement; it is a comparison of the relative strengths of two sounds. Furthermore, it is a logarithmic measurement, so that very large ratios can be expressed with small numbers. The formula for computing the decibel relationship between two sounds of powers A and B is:

dB = 10 log(A/B)
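The decibel arithmetic for power ratios is easy to try out in Python (the function name is ours, for illustration only):

```python
import math

def db(power_a, power_b):
    """Decibel relationship between two powers: 10 * log10(A / B)."""
    return 10 * math.log10(power_a / power_b)

print(db(2, 1))    # doubling the power is about 3 dB
print(db(100, 1))  # a hundredfold power ratio is exactly 20 dB
```

Note that this is the power form; the ratio being compared must be a ratio of powers, not of voltages or pressures.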
The Spectral Plot
A spectral plot is a map of the energy of a sound. It shows the relative strength of each component. Each component of a complex sound is represented by a bar on the graph. The frequency of a component is indicated by its position to the right or left; its amplitude is represented by the height of the bar. The frequencies are marked out in a manner that gives equal space to each octave of the audible spectrum. The amplitude scale is not usually marked, since we are mostly concerned with the relative strengths of each component. It is important to realize that whenever a spectral plot is presented, we are talking about the contents of a sound. In the example, the sound has four noticeable components: 500 hz, 1000 hz, just below 2000 hz, and just above 2000 hz.
Envelopes are a very familiar type of graph, showing how some parameter changes over time. This example shows how a sound starts from nothing, builds quickly to a peak, falls to an intermediate value and stays near that value a while, then dies away back to zero. When we use these graphs, we are usually more concerned with the rate of the changes that take place than with any actual values. A variation of this type of graph has the origin in the middle: even when the numbers are left off, we understand that values above the line are positive and values below the line are negative. The origin does not mean 'zero frequency'; it represents no change from the expected frequency.
The most complex graph you will see combines spectral plots and envelopes in a sort of three dimensional display. This graph shows how the amplitudes of all of the components of a sound change with time. The 'F' stands for frequency, which is displayed in this case with the lower frequency components in the back. That perspective was chosen because the lowest partials of this sound have relatively high amplitudes; a different sound may be best displayed with the low components in front.
When we are discussing the effects of various devices on sounds, we are often concerned with the way such effects vary with frequency. The most common frequency dependent effect is a simple change of amplitude; in fact, all electronic devices show some variation of output level with frequency. We call this overall change frequency response, and usually show it on a simple graph. The dotted line represents 0 dB, which is defined as the 'flat' output, the level that would occur if the device responded the same way to all frequencies of input. This is not a spectral plot; rather, it shows how the spectrum of a sound would be changed by the device. In the example, if a sound with components of 1kHz, 3kHz, and 8kHz were applied, at the device output the 1kHz partial would be reduced by 2dB, the 8kHz partial would be increased by 3dB, and the 3kHz partial would be unaffected. There would be nothing happening at 200Hz since there is no such component in the input signal.
When we analyze frequency response curves, we will often be interested in the rate of change, or slope, of the curve. This is expressed in number of dB per octave. In the example, the output above 16kHz seems to be dropping off.
Once in a while, we will look at the details of the change in pressure (or the electrical equivalent, voltage) over a single cycle of the sound. The graph of the changing voltage is the waveform. Time is along the horizontal axis, but we usually do not indicate any units, since the waveform of a sound is more or less independent of its frequency. The width shown is always one complete period. The dotted line is the average value of the signal. This value may be zero volts, or it may not. The amplitude of the waveform is the maximum departure from this average.
The most common waveform we will see is the sine wave, a graph of the function v = A sin T. Understanding of some of the applications of sine functions in electronic music may come more easily if we review how sine values are derived. You can mechanically construct sine values by moving a point around a circle, as illustrated. Start at the left side of the circle and draw a radius. Move the point up the circle some distance, and draw another radius. The height of the point above the original radius is the sine of the angle formed by the two radii. The sine is expressed as a fraction of the radius, and so must fall between -1 and 1.
Imagine that the circle is spinning at a constant rate. A graph of the height of the point vs. time would be a sine wave. Now imagine that there is a smaller circle drawn about the point that is also spinning. A point on this new circle would describe a very complex path, which would have an equally complex waveform graph. It is this notion of circles upon circles upon circles which is the basis for the concept of breaking waveforms into collections of sine waves. This fanciful machine shows how complex curves are made up of simple circular motions.
The Harmonic Series
A mathematical series is a list of numbers in which each new member is generated by performing some computation with previous members of the list. A famous example is the Fibonacci series, where each new number is the sum of the two previous numbers (1,1,2,3,5,8 etc.). In music, we often encounter the harmonic series, constructed by multiplying a base number by each integer in turn. The harmonic series built on 5 would be 5,10,15,20,25,30 and so forth. The number used as the base is called the fundamental, and is the first number in the series. Other members are named after their order in the series, so you would say that 15 is the third harmonic of 5. The series was called harmonic because early mathematicians considered it the foundation of musical harmony. (They were right, but it is only part of the story.)
One of the aspects of music that is based on tradition is which frequencies of sound may be used for 'correct' notes. The concept of the octave, where one note is twice the frequency of another, is almost universal, but the number of notes that may be found between the two is highly variable from one culture to another, as is the tuning of those notes. In the western European tradition, there are twelve scale degrees, which are generally used in one or two groups of seven. For the past hundred and fifty years or so, the tunings of these notes have been standardized as dividing the octave into twelve equal steps. The western equal tempered scale can then be defined as a series built by multiplying the last member by the twelfth root of two (1.05946). The distance between two notes is known by the musical term interval. (Frequency specifications are not very useful when we are talking about notes.) The smallest interval is the half step, which can be further broken down into one hundred units called cents.
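The twelfth-root-of-two series can be written as a one-line function. A Python sketch (our own, assuming the common A440 reference pitch for the example):

```python
def et_freq(base, half_steps):
    """Frequency of a note a given number of equal-tempered half steps
    above (or below) a base frequency: base * 2 ** (half_steps / 12)."""
    return base * 2 ** (half_steps / 12)

print(round(et_freq(440, 12), 1))  # one octave above A440 → 880.0
print(round(et_freq(440, 1), 2))   # one half step up → 466.16
print(round(et_freq(440, -9), 2))  # nine half steps down → 261.63
```

Note that each successive half step multiplies the frequency by 1.05946, so the steps are equal as ratios, not as numbers of Hertz.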
Equal temperament has a variety of advantages over the alternatives, the most notable one being the ability of simple keyboard instruments to play in any key. The major disadvantage of the system is that none of the intervals except the octave is in tune. To justify that last statement we have to define "in tune". When two musicians who have control of their instruments attempt to play the same pitch, they will adjust their pitch so the resulting sound is beat free. (Beating occurs when two tones of almost the same frequency are sounded together. The beat rate is the difference between the frequencies.) If the two musicians play an interval expected to be consonant, they will also try for a beat free effect. This will occur when the frequencies of the notes fall at some simple whole number ratio, such as 3:2 or 5:4. If the instruments are restricted to equal tempered steps, that 5:4 ratio is unobtainable. The actual interval obtained (supposed to be a third) is almost an eighth of a step too large.
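That claim can be checked numerically. A Python sketch (our own illustration) compares the pure 5:4 third with four equal-tempered half steps, measured in cents:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (100 cents = one half step)."""
    return 1200 * math.log2(ratio)

pure_third = cents(5 / 4)              # the beat-free 5:4 major third
tempered_third = cents(2 ** (4 / 12))  # four equal-tempered half steps
print(round(pure_third, 1))                    # → 386.3
print(round(tempered_third, 1))                # → 400.0
print(round(tempered_third - pure_third, 1))   # → 13.7 cents too large
```

The discrepancy of about 13.7 cents is roughly an eighth of a half step, as stated above.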
It is possible to build scales in which all common intervals are simple ratios of frequency. It was such scales that were replaced by equal temperament. We say scales, plural, because a different scale is required for each key; if you build a pure scale on C and one on D, you find that some notes which are supposed to occur in both scales come out with different frequencies. String players and to some extent winds can deal with this, but keyboard instruments cannot. If you combine a musical style that requires modulation from key to key with the popularity keyboards have had for the last two centuries, you have a situation where equal temperament is going to be the rule.
I wouldn't even bring this topic up if it weren't for two factors. One is that the different temperaments have a strong effect on the timbres achieved when harmony is part of a composition. The other is that the techniques of electronic music offer the best of both systems. It is possible to have the nice intonation of pure scales and the flexibility for modulation offered by equal temperament. Composers are starting to explore the possibilities, and some instrument makers are including multi-temperament capability on their products, so the near future may hold some interesting developments in the area.
The Science of Electronic Music
Digital vs Analogue
What is wrong with analogue?
Not much, really, as long as you keep the systems serviced, control them manually, and test that everything works every day.
What is wrong with digital?
The more general question is "What is wrong with digital processing?" And I'm sorry to say that in some cases the answer is quite a lot. Basically it comes down to either not enough silicon or not enough knowledge, or both. There are certainly digital audio products which are engineered to superlative standards, but there's also a lot of stuff, particularly inside PCs, which truncates (not dithers) the audio signal to ridiculously small internal word lengths, or doesn't interpolate coefficients, or uses on-screen controls with far too little accuracy, or other basically silly techniques.
Waves From Numbers
Nearly all digital music systems use some form of wavetable lookup to generate signals. The wavetable is a section of memory that contains a list of values corresponding to the desired waveform. The computer reads the values from the list at a steady rate (the sampling rate), repeating the table when the end is reached. If the table contains a single cycle of the waveform, the frequency produced would simply be the sample rate divided by the number of values in the table:

Frequency = Sample Rate / Table Length

The output is a very high fidelity copy of the waveform:
Fig. 1 Using all values in the wavetable gives an exact copy of the stored waveform.
To produce higher pitches, the system skips some values each time. The number of values skipped is the sampling increment. A sampling increment of 4 (reading every fourth value) gives an output two octaves higher than the original.
Fig. 2 Effect of increased sampling increment.
The frequency produced is the original multiplied by the sampling increment.
It is possible to have fractional increments; the computer interpolates between listed values, or simply reads a number twice. (If all numbers are read twice, the pitch is one octave down.) As a matter of fact, the number of the most recent value chosen is kept in a register known as the phase accumulator, which has more precision than necessary to handle the size of table used. The high part of the p.a. points into the table, and the low bits contain the fraction. The sampling increment is added to the phase accumulator during each iteration, which will produce the appropriate stepping rate. (Strictly speaking, the value obtained from one wavetable lookup can be added to the sampling increment for another wavetable lookup.)
Fig. 3 A sampling increment of .5.
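A phase accumulator of this kind can be sketched in Python. The table size, word lengths, and names below are arbitrary choices made for illustration, not any particular machine's design:

```python
import math

TABLE_BITS = 8
TABLE_SIZE = 1 << TABLE_BITS   # a 256-entry table holding one cycle of a sine
FRAC_BITS = 16                 # extra precision for fractional increments
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(freq, sample_rate, n_samples):
    """Fixed-point phase-accumulator oscillator: the high bits of the
    accumulator index the table, the low bits hold the fraction."""
    increment = round(freq * TABLE_SIZE / sample_rate * (1 << FRAC_BITS))
    phase = 0
    out = []
    for _ in range(n_samples):
        out.append(table[phase >> FRAC_BITS])  # ignore the fraction (no interpolation)
        phase = (phase + increment) & ((1 << (TABLE_BITS + FRAC_BITS)) - 1)
    return out

# 440 hz at a 44100 hz sample rate; the increment is fractional (about 2.55).
samples = wavetable_osc(440, 44100, 100)
print(round(max(samples), 2))  # → 1.0
```

Dropping the fraction instead of interpolating, as this sketch does, is cheap but adds a small amount of distortion.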
Amplitude control can be added with a variety of techniques. The obvious way is to simply multiply the sample value by a number derived in a similar manner from an envelope table. A more efficient technique is available if the waveform is a sine. During each sample period two values are taken from the table: one found the usual way, and another at a location offset from the first according to the envelope. The two values are then added before moving to the output. The sum of two sine waves that are out of phase is a sine of amplitude determined by the phase difference. If the offset equals half the table length, the output will be zero.
Frequency modulation is a very powerful algorithm for creating sounds. The heart of the technique is the way extra tones (sidebands) are created when one oscillator is used to modulate the frequency of another. The carrier is the oscillator we listen to; the modulator is an oscillator that changes the frequency of the carrier at an audio rate. These sidebands are symmetrically spaced about the frequency of the carrier, and the size of the spaces is equal to the frequency of the modulator. Increasing the modulation increases the number of sidebands, but the amplitude of the sidebands varies in a rather complex way as the modulation changes.
Fig. 4 Spectrum of simple frequency modulation
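The basic relationship can be sketched in Python. This is only an illustration; like most digital "FM" it actually modulates the phase of the carrier, which produces the same family of spectra:

```python
import math

def fm_tone(carrier, modulator, index, sample_rate=44100, n=44100):
    """Simple FM tone: out[i] = sin(2*pi*fc*t + index * sin(2*pi*fm*t)).
    The index controls how far the sideband energy spreads."""
    out = []
    for i in range(n):
        t = i / sample_rate
        out.append(math.sin(2 * math.pi * carrier * t +
                            index * math.sin(2 * math.pi * modulator * t)))
    return out

# Sidebands appear at carrier +/- k * modulator. With a 500 hz carrier and
# a 100 hz modulator they are all harmonics of 100 hz:
carrier, modulator = 500, 100
sidebands = sorted({abs(carrier + k * modulator) for k in range(-5, 6)})
print(sidebands)  # → [0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

tone = fm_tone(200, 200, index=2, n=1000)
```

The `abs` in the sideband list reflects the folding of "negative" frequencies back into the spectrum discussed below.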
There are three kinds of relationship between the frequencies of the carrier and modulator, and each produces a different family of sounds. If the modulator and carrier are the same frequency, all of the sidebands will be harmonics of that frequency, and the sound will be strongly pitched. You may wonder how that can be if there are supposed to be sidebands at frequencies lower than the carrier. If the spacing of the sidebands is the same as the carrier frequency (as it will be if modulator equals carrier), the first sideband below the carrier will be zero in frequency. The sideband just below that will be the carrier frequency, but negative. When that concept is applied in the real world, the result is the carrier frequency, but 180deg. out of phase. That sideband therefore weakens or strengthens the fundamental, depending on the modulation index. Further low sidebands interact with upper sidebands in the same way. The regularity of the sidebands produces the strongly harmonic sound often associated with synthesizers, but if the modulation index is changed during the note (dynamic modulation) the intensity of the sidebands will change, producing very voicelike effects.
Fig. 5 Harmonic spectrum generated with FM
If the frequencies of the carrier and modulator are different but rationally related, the result will again be strongly harmonic, and the pitch will be the root of the implied series. (For instance, frequencies of 400hz and 500hz imply a root of 100hz.) If the carrier is the higher frequency, the resultant sound will be quite bright, sounding like a high pass effect at low modulation and becoming very brash as the modulation increases. The pitch of the carrier is always prominent. If the carrier is the lower frequency, the sound will have "missing" harmonics, and those that are present will seem to appear in pairs (see figure 6). At low modulation index, you will hear two distinct pitches in the tone; as the index is increased, the timbre of the higher pitch seems to become brighter.
Fig. 6 FM with modulator frequency higher than carrier
If the frequencies of the carrier and modulator are not rationally related, the tone will have a less definite pitch, and will have a rich sound. Very often the effect is of two tones: a weak pure tone at the carrier frequency, plus a complex sound with a vague pitch. With careful adjustment of the operator levels of carrier and modulator, the carrier tone can be nearly eliminated. If the frequencies of the carrier and modulator are close to, but not quite, harmonic, timbral beating will occur at a rate that equals the difference.
Fig. 7 Nonharmonic FM spectrum
A particularly powerful aspect of frequency modulation as a music synthesis technique is that the timbres can be dynamically varied. By applying an envelope function to the amount of modulation or the frequencies of carrier and modulator, sounds can be produced that have a life and excitement far beyond that available with the older synthesis methods.
Music in the Computer
Part of the lure of electronic music is the ability for one musician to perform highly complex compositions, or for the composer to hear his music without the need for performers at all. Splicing and digital editing allow this of course, but it is very tedious. As soon as analog synthesis became affordable, music engineers began looking for methods of automatic control for the systems.
Computer control was too expensive to contemplate in the early days (computer rental was over a million dollars), so a variety of techniques were tried: punched paper tape (Babbitt's work on the RCA machine), recorded control signals (Subotnick's Butterflies) and elaborate digital sequencers (early Tangerine Dream). Some decent music was produced this way, but it was still hard work and the results were not really that complex. Electronic music that approaches orchestral music in scope had to wait for the appearance of cheap personal computers.
The first schemes (1974-84) for connecting synthesizers to computers were
homemade or sold in small quantities by tiny companies. This led to a
variety of systems that were mutually incompatible and so idiosyncratic
that only their inventors could write software for them. The usual
approach was to connect extra circuitry to the computer that either
generated sounds directly or provided several channels of voltage
control for modular synthesizers.
In 1983, several synthesizer
manufacturers agreed on a communications protocol that would allow
keyboard synthesizers to control each other (MIDI). This was very
quickly picked up for computer applications, and today we have a mix
and match situation, where any of several computers can be connected to
one or more synthesizers, provided you have the proper software. MIDI
is not perfect (the keyboard orientation and the rather slow data rate
cause hassles), but it has provided an impetus for the development of
software, has lowered the costs of computer assisted music, and has
attracted many new musicians into the field.
The Musical Instrument Digital Interface (MIDI) specification defines both the organization
of the information transmitted and the circuitry used to connect
systems together. The wiring is similar to that used for microphone
cables, two wires within a shield. (The MIDI connector has five pins on
it, but two of those are not connected. This is done for economy: five
pin DIN plugs, widely used overseas for stereo gear, cost less than the
three pin model.) Exactly one input may be attached to each output.
Multiples are not allowed, but most devices have a "MIDI-THRU" output
that simply passes data to the next device down the line. The basic
configuration of equipment is a daisy-chain, with one master device
controlling a series of slave synthesizers. An alternative arrangement
is sometimes used where the data from the controller goes to a splitter
box that feeds the data to several outputs, each connected to one instrument.
MIDI is a serial system. That means data is fed
down a single wire one bit at a time. The bits are generated at the
rate of 31,250 per second, but it takes ten bits to make a character
and up to three characters to make a message, so it takes most of a
millisecond to get anything said. As a rule, each action taken on the
keyboard (such as releasing a key) generates a message. The typical
message contains a channel number, a code for the key or other control
affected, and descriptive data, such as key velocity. The channel
number indicates which instruments are to respond to the data. There
are sixteen channel numbers.
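The timing arithmetic is simple enough to check in a few lines, using the ten-bits-per-character framing and bit rate given above (the function name is ours, for illustration):

```python
BIT_RATE = 31250     # MIDI bits per second
BITS_PER_BYTE = 10   # 8 data bits plus start and stop bits

def message_time_ms(n_bytes):
    """Transmission time for a MIDI message of n_bytes, in milliseconds."""
    return n_bytes * BITS_PER_BYTE / BIT_RATE * 1000

print(round(message_time_ms(3), 2))       # a 3-byte note-on → 0.96 ms
print(round(message_time_ms(3) * 20, 1))  # a 20-note chord → 19.2 ms
```

This is where the "most of a millisecond" figure comes from, and why thick chords smear out noticeably.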
It is surprisingly easy to generate
a lot of MIDI data. For instance, many keyboards have aftertouch; a
feature that measures how hard you press on a key as you hold it down
and feeds that information into the data stream. If you hit a chord and
wiggle your wrists, you might generate several thousand bytes of data.
This data may be vital, or it may be useless, depending on exactly how
other instruments in the MIDI chain are voiced. When the data stream
gets too full, bizarre things begin to happen. Instruments slow down,
or messages can get lost. For this reason, many instruments and
programs have a filter feature which removes selected types of data.
You can even buy a special purpose box to do this.
Streams of MIDI data cannot be mixed together in the simple manner two analog
signals can. The group of bits that makes up a message must be kept
intact or the meaning will be garbled. A device that combines MIDI
signals, called a Merger, has a microprocessor in it that can recognize
messages, assign a priority to them, knows how long each message should
be, and prevents potential collisions by storing low priority messages
until the output line is available. (This process is like switching
freight trains onto a common track without getting the cars mixed up.)
There are some other special tricks available in boxes. For instance there is
a MIDI Delay which simply stores data a while before sending it along.
If you connect an instrument's MIDI out to its own MIDI in through one
of these, you get some complex echo effects. Another type of box is a
Mapper which can change data to compensate for differences in
synthesizers. For instance, instruments often vary in the number of
presets they can store. If you are using a fancy machine to control
several simple ones, the fancy machine may implement all 128 preset
locations, and the cheapies may only have 32. When you select preset 33
on the main synthesizer, it will send program change 33, which may have
an unpredictable result on the slave. The mapper can be set to change
that program 33 to anything you desire. [These features are also
available as a part of better computer programs. Any synthesizer with
more than 128 presets must have some sort of mapping feature.]
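A mapper's core job can be sketched in a few lines of Python; the table contents and fallback rule here are invented purely for illustration:

```python
# Hypothetical mapping from the master's program numbers to a slave
# that only implements 32 preset locations (0-31).
program_map = {33: 5, 34: 12}  # master program -> slave program

def map_program(n):
    """Translate a program-change number; unmapped numbers wrap into 0-31."""
    return program_map.get(n, n % 32)

print(map_program(33))  # explicitly remapped → 5
print(map_program(40))  # no entry, so it wraps → 8
```

A real mapper or software equivalent would let you edit this table from the front panel or screen.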
Another type of box that is very popular is the MIDI patcher. This device has a
lot of inputs and outputs, say eight of each. Controls on the box
electrically switch inputs to various outputs, so you don't have to
fish around for the MIDI cables to change your system configuration. A
particularly intriguing feature is that a configuration can be assigned
a program number, so that the patch can be controlled over the MIDI line.
The MIDI protocol is often badmouthed because the original intentions of
the designers are misunderstood. The system was created to allow a
simple, cheap, and universal interconnection scheme for instrument
controllers and synthesizers. The specification was developed by a
committee made up of representatives from several companies, and
contains many compromises between various needs and opinions. The
specification was inadvertently modified in translation to Japanese,
but since the company that made the mistake sells more synthesizers
than all other companies combined, their implementation became the
standard. The MIDI committee is still active, and adds features to the
specification from time to time.
The complaint heard
most often about MIDI is that it is too slow. It takes one millisecond
(1/1000 sec) to send the command that starts a note. This is musically
imperceptible (in normal notation, MM=60,000) in simple pieces, but
the delay across a twenty note chord can be noticed by a keen ear. The
actual effect of this problem on the music is arguable (very few bands
are together within twenty milliseconds). Probably the worst case for a
performer is when the delay is unpredictably varied. The activities
that generate the most frustration are elaborate computer controlled
performances. The series connection MIDI system can clog up quickly
when detailed control of a lot of instruments is attempted. The cure
for this is to use a parallel connection scheme where the computer
itself has several MIDI outputs.
Another complaint is that MIDI sends the wrong information. It is clear that
the standard was written with keyboard controllers in mind, and that is
sensible, since the organ type keyboard is the most common controller
for polyphonic single performer instruments. It is quite difficult but
not impossible to design controllers with a continuous effect, such as
a wind or bowed string instrument has, but the speed problem becomes
extreme in such cases.
There is a proposal for a new standard, called "ZIPI" that addresses
these two problems.
A perplexingly common occurrence is the stuck note. This happens because
each note needs a separate message for note on and note off. If the
note on is received, but the note off gets lost because of a loose
cable, the note will sound forever. With many synthesizers the only way
to get the note to shut up is to press many keys or turn the power off.
(Most will quit if you change presets.)
The channelization scheme chosen causes a lot of confusion, but is not a
problem. The channel numbers are really a tag on each command, and
instruments have the option of ignoring commands that are not tagged a
certain way. Difficulties arise when sending devices and receiving
devices are not set to the same channel. The newer instruments can be
set up to follow different channels with different voices, and this
operation is often not clearly explained. The worst problem is that
channel setting is usually hidden deep within an instrument's menus
rather than on the front panel where it belongs.
There is also some confusion about program numbers. The MIDI spec allows for
128 programs, numbered 0-127. Many manufacturers seem to feel that
musicians are not ready to accept the concept of program zero, and
number their buttons 1-128. Even worse are the systems that use funny
numbering schemes, such as 88 meaning program 8 of bank 8.
Further problems arise when one encounters a maverick corporation such as E-mu
or Oberheim that calls a zero a zero; and when you need to enter
program changes directly into a computer program. Of course the
widespread belief that 128 programs are not enough has thrown another
monkey wrench into the works as each company develops its own scheme
for calling up to 1000 presets.
One of the most
troublesome features is omni mode. A synthesizer set to omni will
respond to any MIDI message, regardless of channel assignments. A
typical problem this can cause is found when using Concertware: the
player sends initial program changes for all eight voices at the
beginning of a selection, even if there is nothing in some of the
voices. A synthesizer in omni mode will respond to all of the program
changes and wind up with the program number requested by voice eight.
It is a good idea to check the mode of the synthesizer first off, since
you don't know what the previous student was doing. (The only point to
omni mode is to make synthesizers easy to demonstrate. I think it ought
to be called "Salesman Mode".)
Overcoming these problems is a
challenge, but is similar to challenges musicians are already familiar
with. Here are a few guidelines to maintain sanity.
Use a simple
configuration, and stay with it. The MIDI system is designed to have
one master controller running a bunch of slaves. Mergers allow the use
of two or more controllers, and switchers allow quick reconfiguration
of the system, but there is usually little to be gained. The people who
repatch the MIDI lines a lot are usually trying to use a black box
sequencer and a keyboard as controllers at the same time.
Don't overload the system. Always filter out unnecessary information.
Aftertouch, for instance, should never be sent unless some device is
responding to it. If you are playing with a sequenced track, the pedals
are probably of interest only to the synthesizer you are playing.
Know the difference between OUT and THRU. OUT is information generated by
the instrument. THRU is a copy of the input data. A few devices such as
the Fadermaster provide a mix of the input and its own data at the OUT
jack. Take care of your cables. The MIDI connector is not noted
for ruggedness and reliability. It is possible for a plug to look like
it is in, but be loose enough to stop the data.
Read the manual.
Read the Manual. READ THE MANUAL. Especially the part in the back that
shows which MIDI features actually work. Pay particular attention to
how to set the channel number and how to turn OMNI mode off.
Midi & the Machine
A MIDI message can consist of from one to several thousand bytes of data.
The receiving instrument knows how many bytes to expect from the value
of the first byte of the message. This byte is known as the status
byte, the others are data bytes. Status bytes always have the most
significant bit (msb) equal to one, and data bytes have an msb of zero
(if a status byte is received where a data byte is expected, the system
assumes a transmission error has occurred). Because the msb of data
bytes is always zero, actual values are limited to numbers less than
128. This restricts many things in the MIDI universe, such as the
number of presets available.
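The msb rule is simple enough to express in a couple of lines of code. A minimal sketch in Python, with the masks following directly from the bit layout described above:

```python
def is_status(byte):
    """A status byte has its most significant bit set."""
    return byte & 0x80 != 0

def channel(status_byte):
    """Channel messages carry the channel number (0-15)
    in the low four bits of the status byte."""
    return status_byte & 0x0F

# 0x90 is note on for channel 0; 60 (middle C) is a data byte.
assert is_status(0x90) and not is_status(60)
assert channel(0x91) == 1      # note on, channel 1
```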
Status bytes inform the receiver as
to what to do with incoming data. Many of the commands include the
channel number (0-15) as the four least significant bits of the status
byte. Commands are defined for about everything you would expect a
synthesizer to do, to wit:
* Note On
* Note Off
* Control Change
* Program Change
* Aftertouch (for the entire keyboard,
set by the heaviest push)
* Polyphonic aftertouch (values for each key)
* Pitch bend
Note Messages
Note On and Note Off
The most common status is note on. [The actual bit values are: 1001nnnn,
where nnnn gives the channel number.] Note on is followed by two data
bytes, the first is the note number, the second is key velocity. If a
keyboard is not equipped to sense velocity, it is supposed to send the
value 64. Not too surprisingly, there is a status called note off, with
the same data format. Note off is actually not used very much. Instead,
MIDI allows for a shorthand, known as running status (Running Status
applies to any status message, so this system can be used to send a
series of program changes or aftertouch messages just as easily). Once
a note on is received, an instrument interprets each pair of data bytes
as instructions about a new note. If the velocity data is zero, the
instrument performs a note off with velocity of 64.
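Running status can be sketched as follows; the helper function is hypothetical, but the byte layout matches the note-on format described above:

```python
def note_on_stream(channel, notes):
    """Build the byte stream for a series of notes using running status:
    one status byte, then note/velocity pairs. A velocity of zero doubles
    as note off, which the receiver treats as velocity 64."""
    stream = [0x90 | channel]          # note on status for this channel
    for note, velocity in notes:
        stream += [note, velocity]
    return stream

# Start middle C (note 60) at velocity 100, then stop it:
msgs = note_on_stream(0, [(60, 100), (60, 0)])
print([hex(b) for b in msgs])   # ['0x90', '0x3c', '0x64', '0x3c', '0x0']
```

Note that only one status byte is sent no matter how many notes follow, which is the whole point of the shorthand.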
This way of thinking, requiring separate actions to start and stop a note,
greatly simplifies the design of receiving instruments (the synthesizer
does not have to keep time), but creates the potential for hung notes
when a note off gets lost. The MIDI designers provided
features to compensate for this problem. There is a panic command, all
notes off, which is generated by some keyboards and even some special
devices. The note numbers start with 0 representing the
lowest C. "Middle C" is supposed to be note 60. Middle C is usually
known as "C4", but for some reason most manufacturers call it C3.
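The C3/C4 discrepancy comes down to where a manufacturer anchors the octave numbers. A small sketch of the conversion:

```python
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(number, middle_c_octave=4):
    """MIDI note 60 is middle C; which octave label it gets
    depends on the manufacturer's convention."""
    octave = number // 12 - (5 - middle_c_octave)
    return f"{NAMES[number % 12]}{octave}"

print(note_name(60))                       # C4
print(note_name(60, middle_c_octave=3))    # C3, as many manufacturers label it
```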
There is a group of commands called control changes that relate to actions
of things like foot pedals, modulation wheels, and sliders. Each
command has two parts, defining which control to change and what to
change it to. These are not very rigidly defined, so many systems allow
assignment of controllers as part of preset definition. These are some
of the official definitions: (numbers are actual data numbers)
* 1 Mod wheel
* 2 Breath controller
* 4 Foot controller
* 5 Portamento time
* 6 Data entry knob
* 7 Main Volume
* 8 Balance
* 10 Pan
* 11 Expression
Each controller usually has a single data byte, giving a range of 0-127 as
the value. This is rather coarse, so the controllers from 32 to 63 are
reserved to give extra precision to those assigned from 0 to 31.
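Combining a coarse controller with its fine partner is a simple bit operation; each byte contributes seven bits, giving fourteen in all. A sketch:

```python
def fine_value(msb, lsb):
    """Combine a coarse controller (0-31) with its fine partner (32-63).
    Each data byte carries 7 bits, so together they give 14 bits,
    a range of 0-16383."""
    return (msb << 7) | lsb

# Controller 7 (volume) value 100 refined by controller 39 value 5:
print(fine_value(100, 5))   # 12805
```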
The numbers from 64 to 69 are switches or pedals:
* 64 Sustain
* 65 Portamento
* 66 Sostenuto
* 67 Soft
The numbers from 98 to 101 allow extended control changes called NRPN
& RPN values.
NRPN (pronounced "nurpin") stands for Non Registered Parameter Number. This
allows companies to define their own extensions to the list of control
changes. The approach is to first tell what to change, then what to
change it to. There are two messages devoted to what to change- MSB and
LSB, or Most Significant and Least Significant Bytes. Together, they
indicate which parameter of the instrument to change. This limits a
manufacturer to 16,000 or so controls per instrument. The value is
transmitted in the "Data Entry", "Data increment" or "Data decrement"
controller messages. Data Entry (which has an optional LSB like any of
the first 32 controls) sets the value. Increment and Decrement add or
subtract from the current value.
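An NRPN change is thus just a run of ordinary control change messages. A sketch of the byte sequence (the parameter number and value here are made up for illustration; controllers 99 and 98 carry the parameter, 6 and 38 the data entry value):

```python
def nrpn(channel, parameter, value):
    """Control change bytes to set an NRPN: parameter MSB (CC 99) and
    LSB (CC 98), then the value via Data Entry MSB (CC 6) and LSB (CC 38).
    The parameter number used below is hypothetical."""
    status = 0xB0 | channel                 # control change status
    return [status, 99, parameter >> 7,     # NRPN parameter MSB
            status, 98, parameter & 0x7F,   # NRPN parameter LSB
            status, 6, value >> 7,          # data entry MSB
            status, 38, value & 0x7F]       # data entry LSB

print(nrpn(0, 291, 8192))
```

What parameter 291 actually changes, if anything, is entirely up to the manufacturer.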
There are also a couple of RPNs
or Registered Parameter Numbers. RPNs allow the MMA to add defined
controllers as the need for them becomes apparent. They are almost the
same format as NRPN.
There are some specialized control messages:
* 121 Reset all controllers
* 122 Local Control
* 123 All Notes Off
* 124 Omni Mode Off
* 125 Omni Mode On
* 126 Mono Mode On
* 127 Poly Mode On
Reset All Controllers and All Notes Off have the obvious effects. What is not so
obvious is that neither will work on a synthesizer set to Omni mode. Local
Control allows you to disconnect a keyboard from the synthesizer it is
built into. The Keyboard still sends MIDI Data, and the synthesizer
still responds to MIDI data, but pressing a key will not necessarily
produce a sound. This is useful when you are using a computer based
sequencer and want the computer to have total control of the sounds.
Controller 0 is the Bank change message. A bank change followed immediately by a
program change should take you to a new sound on a different bank, but
the actual use varies from instrument to instrument.
The bank change command is controller 0, optionally followed by its LSB partner,
controller 32. The change should not take effect until a program change
is received, so the messages sent should be:
* cntrl 0 bank number MSB
* cntrl 32 bank number LSB
* program change
Few synthesizers have enough banks to need both MSB and LSB, so a few
expect the number in the MSB and don't need an LSB. Others demand an
MSB of 0, with the bank number in the LSB. They are supposed to wait
for the program change before changing sounds, but a few change
instantly upon receipt of the bank change.
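The recommended sequence above can be sketched as bytes (a minimal illustration; real instruments vary, as just described):

```python
def bank_and_program(channel, bank, program):
    """Byte sequence for a bank change: controller 0 (bank MSB),
    controller 32 (bank LSB), then the program change itself."""
    cc = 0xB0 | channel
    return [cc, 0, bank >> 7,         # cntrl 0: bank number MSB
            cc, 32, bank & 0x7F,      # cntrl 32: bank number LSB
            0xC0 | channel, program]  # program change

# Bank 1, program 5, on channel 0:
print(bank_and_program(0, 1, 5))   # [176, 0, 0, 176, 32, 1, 192, 5]
```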
The actual bank
numbers are not always what you would expect. Many machines use 0 1 2..
etc, but the Roland JD 990 numbers its two banks 80 and 81. There is
also the same confusion about starting with 0 or 1 that we have on the
program numbers. Some instruments also use bank changes to switch
between performance and voice play mode.
The modes take some explaining. When all this was set up, most synthesizer
keyboards were monophonic, like the Moog. (Monophonic here means they
would only play one note at a time.) A few instruments could play
chords, these were Polyphonic. The original MIDI spec assumed you would
use a MIDI channel to control each oscillator on an instrument or you
would have instruments that would play chords from one channel. No one
foresaw the current situation, where multitimbral synthesizers can play
chords in response to several if not all of the MIDI channels.
There are four possible combinations of the mode messages:
* Omni On, Poly On or Mode 1: The
synthesizer plays everything it gets.
* Omni On, Mono (Mode 2): The
synthesizer plays only the most recent note.
* Omni Off, Poly (Mode 3): The
synthesizer plays chords on one channel.
* Omni Off, Mono (Mode 4): The synthesizer plays the most recent note
received on its base channel. It also plays the most recent note
received on the next channel, and the one after that, until it's out of
voices. There is no message for Multi mode, so it has to be chosen from the
front panel. The sound of a synthesizer is determined by the connections between the
modules and settings of the module controls. Very few current models
allow repatching of the digital subroutines that substitute for
modules, but they have hundreds of controls to set. The settings are
just numbers, and are stored in computer type memory. In a computer, a
particular group of settings would be called a file. In synthesizers,
it's a Patch, Preset, Voice, or Tone for different brands, but the
official word is program. A MIDI message may call one of up to 128 of
these by sending data of 0 to 127.
Most modern synthesizers have
more than 128 presets. Different manufacturers and models implement a
variety of ways to make these accessible by MIDI commands:
On some instruments, 128 presets are called up by the Program Change
commands, but you can choose ahead of time which presets are called by
which command. You can assign preset 4 to Pgm Change 1, preset 205 to
Pgm Change 2, and so forth. This kind of list is called a Map, and is
occasionally used for other operations too.
Other instruments organize the presets in groups of 64 or 128. Then you pick
which group is in use at any time by pressing buttons on the
instrument. At least one of the banks will be writeable, and you can
copy presets into it if you want to combine some from different
permanent banks. Bank switching may be possible via MIDI, but the
method for doing this is not standardized.
Some instruments let you define a multi channel (or complex keyboard) setup
that combines various presets. These Performance setups (also called
Multis, or Mixes) are stored in a bank of their own. The Program Change
command then picks among these. Performance setups can also have
settings for processors, volume, pan, and so on.
(When an instrument is in multi channel performance mode, program changes may
change the performance setup, or may change the program on a particular
channel. This depends on a setting hidden somewhere in the MIDI setup
of the instrument.)
Program changes have data values of 0 to
127, but the corresponding programs are supposed to be numbered 1-128. Many synthesizer and
software companies do not follow this convention, so you basically have to experiment to find
out what will happen when a particular application sends a program
change to a particular instrument.
Most of the
wheels and knobs on a synthesizer generate control change messages, but
one gets a status message of its own. This is the Pitch Bender. A
dedicated message makes it possible to efficiently send a bend value of
14 bits. If you try to do pitch bend with only seven bits of precision,
you either have to restrict the range or you get audible steps.
Unfortunately, no manufacturer takes advantage of this.
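Decoding the 14-bit bend value from the message's two data bytes looks like this (the semitone conversion assumes a bend range of plus or minus 2 semitones, which is set on the instrument and not carried in the message):

```python
def bend_value(lsb, msb):
    """The two data bytes of a pitch bend message (LSB first) form a
    14-bit value; 8192 (0x2000) is the center, i.e., no bend."""
    return (msb << 7) | lsb

def bend_semitones(lsb, msb, bend_range=2.0):
    """Convert to semitones, assuming a +/- 2 semitone range."""
    return (bend_value(lsb, msb) - 8192) / 8192 * bend_range

print(bend_value(0, 64))       # 8192: centered, no bend
print(bend_semitones(0, 96))   # 1.0: halfway up with a 2-semitone range
```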
On many keyboards, if you lean into the key as you hold it down, you
generate controller messages. This is a very expressive feature. With
normal aftertouch (also known as Channel Pressure), the values sent
correspond to the key with the most pressure.
Polyphonic aftertouch sends separate pressure information for each key. This is a
tremendous amount of information, and only a couple of synthesizers
respond to it.
The preceding messages are
Channel Voice Messages which apply only to instruments set to the
specified channel. System Messages apply to all machines:
* Song Pointer
* Song Select
* Timing Clock
* Start, Stop, and Continue
* MIDI Time Code
* Active Sensing
* System Reset
With the first of these commands, several sequencers or computers can be
cued to a preset point in a composition and run together. The clock
command is a single byte that is "broadcast" by a master sequencer at
the rate of 24 per quarter note. Sequencers can follow this clock and
stay in tempo. This clock can be recorded on tape and played back with
a suitable adapter. If this recording happens to be on a multi-track
tape deck, complex sequences can be built up using many passes with a
single sequencer. Song Select and Song Pointer cue up sequencers and drum machines, and
Start, Stop and Continue control their operation.
An even more sophisticated synchronization system called MIDI Time Code is
now available. In this system, time markers are recorded continuously
on the tape. When the tape is played, sequencers will be automatically
cued to match the tape. (This is a version of SMPTE time code, which
does the same thing for video and audio editors.) Moreover, sequencers
can be set to start doing their thing at arbitrary points in the
composition, allowing such techniques as "slipping tracks" and
eliminating the tedious process of composing long sequences of rests.
Active sensing warns an instrument if there is a serious malfunction. Once the
active sensing command has been received, the instrument expects
something on the MIDI line at least every 300 milliseconds (If the
controller has nothing to say, it sends more active sensing messages.).
If nothing is received the instrument shuts all notes off.
System Reset is supposed to return synthesizers to their power Up
state. Hardly any recognize this.
The final group of commands are the SYstem EXclusive commands. These are
commands that the manufacturer may define as they like. (Each
manufacturer is assigned an ID code to prevent confusion.) The data
stream may be arbitrarily long, terminating with a command known as End
of Exclusive (EOX.) These messages are used for passing preset
information, sequences, and even sound samples from one machine to
another, and provide the foundation for the editor/librarian computer
programs. Messages are not limited to program data; on the Yamaha
instruments, system exclusive commands can be used to control
everything, including the power switch.
Extensions To Midi
The MIDI Manufacturers Association has not stopped their work. Since the
initial definitions they have produced the following:
MIDI Time Code Described above, MTC made it possible to link MIDI
systems to video and other time based operations.
Sample Dump Standard This allows samples to be transferred from one
brand of sampler to another.
MIDI File This one allows MIDI tracks recorded on one sequencer program
to be used by another, even if it runs on a different kind of computer.
Show Control This defines ways to automate theatrical productions,
synchronizing lighting effects, sound, and even fireworks.
Machine Control This allows remote control of audio and video
recorders. With this and Time Code, you can run an entire studio from
a computer. General MIDI is a response to a problem that arose with the popularity of the
Standard MIDI file. As composers began exchanging compositions (and
selling them) in SMF format, they discovered that pieces would change
when played on different synthesizers. That's because the MIDI program
commands simply provide a number for a preset. What sound you get on
preset four is anybody's guess.
General MIDI defines a standard
list of voices. (This list is a sort of snapshot of the synthesizers
that were popular in 1991. The easiest way to get it is to buy a GM
compliant synthesizer.) Not only the names are standardized: envelope
times are defined so the right sort of textures are maintained.
General MIDI also defines channel 10 as the percussion channel, and
gives a map of the drum sound to associate with each note. A GM
instrument may create these sounds in any manner, so there's still a
lot of variation, but you no longer get a tuba when you expect a bass
guitar. Most synths that support General MIDI do so by providing a
bank titled GM. This is mostly a rearrangement of sounds from other
banks. General MIDI is most important in the soundcards that
plug into PCs. These allow game programmers to create MIDI based scores
instead of including recorded sounds for the music cuts.
General MIDI is coming to Macintosh computers as part of the expanded QuickTime
system. MIDI scores will be playable with no synthesizers at all!
In music, sampling is the act of taking a portion, or sample, of one sound
recording and reusing it as an instrument or a different sound
recording of a song. The widespread use of sampling in popular music
originated with the birth of hip hop music in New York in the 1970s,
but in granular synthesis, a groundbreaking music technology,
sampling is used to break a sound wave down into fundamental
grains, which can be applied to entirely new forms of composition or
used in a stochastic musical form. Sampling is typically done with a
sampler, which can be a piece of hardware or a computer program.
Sampling is also possible with tape loops or with vinyl records on a
phonograph. Breaking down a sample, sound wave, or wave file into its
components opens an entirely new set of mathematical elements. Composers
of the past sampled with tape; composers like Louis
& Bebe Barron (the Forbidden Planet soundtrack) or Iannis Xenakis
spent long hours cutting tape into samples to score their compositional
works. Today this long process can be dramatically shortened by the use
of sampling hardware or computer software. These products sample audio
to a file that is either .WAV or .AIFF format.
WAVs and AIFFs are compatible with Windows, Macintosh, and Linux
operating systems. The format takes into account some differences of
the Intel CPU such as little-endian byte order. The RIFF format acts as
a “wrapper” for various audio compression codecs. Though a WAV file can
hold compressed audio, the most common WAV format contains uncompressed
audio in the linear pulse code modulation (LPCM) format. The standard
audio file format for CDs, for example, is LPCM-encoded, containing two
channels of 44,100 samples per second, 16 bits per sample. Since LPCM
uses an uncompressed storage method which keeps all the samples of an
audio track, professional users or audio experts may use the WAV format
for maximum audio quality. WAV audio can also be edited and manipulated
with relative ease using software. The WAV format supports compressed
audio, using, on Windows, the Audio Compression Manager. Any ACM codec
can be used to compress a WAV file. The user interface (UI) for Audio
Compression Manager may be accessed through various programs that use
it, including Sound Recorder in some versions of Windows. Beginning
with Windows 2000, a WAVE_FORMAT_EXTENSIBLE header was defined which
specifies multiple audio channel data along with speaker positions,
eliminates ambiguity regarding sample types and container sizes in the
standard WAV format and supports defining custom extensions to the
format chunk. There are some inconsistencies in the WAV format: for
example, 8-bit data is unsigned while 16-bit data is signed, and many
chunks duplicate information found in other chunks. WAV files can also
contain embedded IFF "lists", which can contain several "sub-chunks".
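A minimal sketch of writing CD-style LPCM audio with Python's standard wave module (one channel here rather than the CD's two, and the 440 Hz test tone is an arbitrary choice):

```python
import math
import struct
import wave

# One second of a 440 Hz sine wave as LPCM:
# 44,100 samples per second, 16 bits per sample.
RATE = 44100
samples = [int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / RATE))
           for t in range(RATE)]

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)          # mono (CDs use 2 channels)
    w.setsampwidth(2)          # 2 bytes = 16 bits per sample
    w.setframerate(RATE)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

The `<` in the struct format string is the little-endian byte order the format inherits from the Intel CPU, as noted above.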
Mathematics in Music Composition
Heinrich Schenker (June 19, 1868 - January 13, 1935) was a music theorist, best
known for his approach to musical analysis now called Schenkerian
analysis. Schenkerian analysis is founded on the idea that most musical
works have a fundamental tonal structure embracing the whole
composition. The Schenkerian technique reduces a composition down into
successive scores, each with fewer and fewer notes. The downward
progression from one score to another involves grouping notes together
and replacing each group by a single note. The final score, called the
background, contains only one note that represents the work's
fundamental tonal structure. This fundamental mathematical
decomposition process derives many new musical scores
from the original.
For Schenker, the natural hierarchies of
music were part of a naturally ordered universe, and tonal music
inherently reflects this order no matter what choices the composer
makes to detail the music. His analytical system yielded its most
productive results when applied to music of the common practice.
Schenker did not consider any musical compositions that failed to
follow traditional principles of tonality. Among the Schenkerian
practitioners whose influence persists to this day is Oswald Jonas, a
traditional disciple who was stricter about the theory than Schenker
himself; he promoted the viewpoint that the analysis belonged only in
the realm of triadic tonal music. Hans Keller, who worked on the
Functional Analysis method, saw music as a constant
battle between repeated themes and new information. In 1967, Hans
Keller had a famous encounter with the rock group Pink Floyd (then
called 'The Pink Floyd') on the TV show The Look of the Week. During
the interview, Keller is quoted saying that Pink Floyd's music was
"...a little bit of a regression towards childhood".
Iannis Xenakis (May 29, 1922 – February 4, 2001) was an ethnic Greek,
naturalized French composer, music theorist, and architect-engineer. He
is commonly recognized as one of the most important post-war
avant-garde composers. Xenakis pioneered the use of mathematical models
such as applications of set theory, varied use of stochastic processes,
game theory, etc., in music, and was also an important influence on the
development of electronic music.
The term stochastic
means based on the theory of probability. In mathematics, specifically in probability theory, the field
of stochastic processes has been a major area of research. A stochastic
matrix is a matrix that has non-negative real entries that sum to one
in each row. Probability is the branch of mathematics concerned
with analysis of random phenomena. The central objects of probability
theory are random variables, stochastic processes, and events:
mathematical abstractions of non-deterministic events or measured
quantities that may either be single occurrences or evolve over time in
an apparently random fashion. Although an individual coin toss or the
roll of a die is a random event, if repeated many times the sequence of
random events will exhibit certain statistical patterns, which can be
studied and predicted.
Two representative mathematical results describing such patterns are the
law of large numbers and the central limit theorem. As a mathematical
foundation for statistics, probability theory is essential to many
human activities that involve quantitative analysis of large sets of
data. Methods of probability theory also apply to descriptions of
complex systems given only partial knowledge of their state, as in
statistical mechanics. Most introductions to probability theory
treat discrete probability distributions and continuous probability
distributions separately. The more mathematically advanced measure
theory based treatment of probability covers both the discrete, the
continuous, any mix of these two and more.
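The statistical regularity described above, the law of large numbers in action, is easy to demonstrate by simulation. A sketch using simulated die rolls:

```python
import random

random.seed(1)  # make the run repeatable

# A single roll is unpredictable, but the average of many rolls
# settles toward the expected value (1+2+...+6)/6 = 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]
mean = sum(rolls) / len(rolls)
print(round(mean, 2))   # close to 3.5
```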
1.) Discrete probability theory deals with events that occur in
countable sample spaces.
Examples: throwing dice, experiments with decks of cards, and
random walks.
2.) Continuous probability theory deals with events that occur in a
continuous sample space.
3.) Certain random variables occur very often in
probability theory because they describe many natural or physical
processes well, so their distributions have gained special importance
in probability theory. Important related notions are weak convergence,
convergence in probability, and strong (almost sure) convergence.
Stochastic music is the name given to a style of generation of musical ideas
developed by Iannis Xenakis, described in his book "Formalized
Music". This is not the same as random music, but rather describes a
technique for developing a musical progression with a random walk-like
character. Stochastic music emerged
in the years 1953-55, when
Iannis Xenakis introduced the theory of probability in music
composition. Xenakis decided to generalize the use of probabilities in music
composition. The work Achorripsis was his first work towards this
generalization. In Achorripsis, a small number of stochastic rules are
applied to generate both the parameters of the notes and the global
structure. The architecture of the piece can be read in a
two-dimensional matrix that is defined in a space where seven rows
representing seven groups of instruments evolve in time. During this time all the stochastic computations were made by
hand or with the help of rudimentary calculating machines. In
the 1960s, Xenakis started to use the computer to automate and
accelerate the many stochastic operations that were needed, entrusting
the computer with important compositional decisions that are usually
left to the composer. In Xenakis' work ST10, the composition of
the orchestra (expressed in percentages of groups of instruments) is
computed by the machine, as well as the assignment of a given note
to an instrument of the orchestra. At the end of the computation of the
musical work, the numerical results were transcribed into traditional
notation so that the music could be played by an orchestra.
In the 1960s, Xenakis put forward the idea of extending the use of
stochastic laws to all the levels of the composition, including sound
production. Xenakis said, "Although this
program gives a satisfactory solution to the minimal structure, it is,
however, necessary to jump to the stage of pure composition by coupling
a digital-to-analog converter to the computer". This proposition was renewed in 1971: "Any theory or
solution given on one level can be assigned to the solution of problems
of another level. Thus the solutions in macro-composition (programmed
stochastic mechanisms) can engender simpler and more powerful new
perspectives in the shaping of microsounds than the usual trigonometric
functions can ... All music is thus homogenized and unified." In the
1970s, at Indiana University, Xenakis experimented with new
methods for synthesizing sounds based on random walks, the theoretical
aspects of which are described in probability theory. In 1991 Xenakis
returned to his dream of making music that would be entirely governed
by stochastic laws and entirely computed. At CEMAMu, Xenakis wrote a
program in Basic that runs on a PC. The program is called GENDY: GEN
stands for Generation and DY for Dynamic; it generates both the musical
structure and the actual sound.
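The GENDY idea can be suggested, very loosely, in a few lines: a waveform is stored as a set of breakpoints, and every cycle each breakpoint takes a small random step, so the timbre drifts stochastically over time. This sketch only illustrates that random-walk principle and is in no way Xenakis' actual program; the breakpoint count and step size are arbitrary:

```python
import random

random.seed(0)  # make the run repeatable

def next_cycle(amplitudes, step=0.05):
    """Random-walk each breakpoint amplitude, clamping at the -1..1
    limits (the real GENDY uses reflecting barriers)."""
    out = []
    for a in amplitudes:
        a += random.uniform(-step, step)
        a = max(-1.0, min(1.0, a))
        out.append(a)
    return out

wave_points = [0.0] * 12             # one cycle's breakpoints, from silence
for _ in range(200):                 # 200 cycles of stochastic drift
    wave_points = next_cycle(wave_points)
print(all(-1.0 <= a <= 1.0 for a in wave_points))   # True
```

Played back at an audio rate, each cycle sounds slightly different from the last, so both the structure and the actual sound are governed by stochastic laws, as Xenakis intended.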