d.i.y. music
Guest columnists' techno tips for creative musical development

creative sound projection with compact loudspeaker arrays
BY PHILIPPE-AUBERT GAUTHIER WITH ALAIN BERRY AND WIESLAW WOSZCZYK

Now, more than ever, the pressure is on to develop multidisciplinary research projects that bring together the expanding universe of scientific knowledge and technical innovation with the equally evolving world of avant-garde art forms. I write here
with a multidisciplinary focus in mind, in
order to share my knowledge of acoustics and
loudspeaker arrays with potentially interested
creators. Loudspeaker arrays are discussed
here at an introductory level intended to
stimulate sound artists and to provide exam-
ples of the potential connections and infor-
mal exchanges that we would like to expe-
rience more often in our lives as artists.
If computers have entered the world of
common musical instruments (Prophet
2005), so too have sound projection systems
for performances or sound installations. An active investment in sound projection departs somewhat from the commercial realm of classical two- or five-channel stereophonic sound projection, and moves towards more creative and engaging approaches; it implies a will to go beyond technical limitations. I can describe here a number of convenient and aesthetically interesting uses, and also provide some technical knowledge; but, as with other musical instruments, mastery will require practice.
Two situations illustrate the need for loud-
speaker arrays in common audio applica-
tions. Specific arrangements of compact and
calibrated loudspeaker arrays are often
required in concert settings for sound rein-
forcement. When there is both a large and
sparsely distributed audience, and a large
reverberating space, dispersed networks of
multiple speakers are necessary in order to
ensure even sound distribution (Davis 1997).
These two kinds of multiple-speaker systems
can be used creatively as a starting point for
artists and composers who want to investi-
gate particular aspects of live sound projec-
tion or sound installation. This article
describes three examples of uses for possi-
ble speaker configurations based on concepts
taken from sound field synthesis. All of these
examples can be effected with relatively sim-
ple technical means.
The use of multiple loudspeakers to
establish distinctive sound projection effects
is not new (Bayle 1993; Clozier 2001), but
some possibilities remain unknown or only
partly explored. Now, more than ever, one
attends concerts where the sound systems
are installed according to an acousmatic tra-
dition, which assumes a relatively small
audience more or less uniformly sur-
rounded by a given number of sound
sources. Another less-exploited school of
thought prefers a similar number of sound
sources placed in a very compact arrange-
ment at the front of the space in order to
intentionally produce spatial illusions. These configurations are based on a technology called Wave Field Synthesis (WFS). WFS relies on older scientific notions attributed to Christiaan Huygens, the seventeenth-century scientist who, among other things, influenced our conception of such wave phenomena as light and sound. These concepts describe sound propagation from point to point and, more importantly, demonstrate how an original source of acoustical waves can be replaced by reproduction sources without altering the resulting patterns of air vibration.
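As a rough back-of-the-envelope statement of this idea (my simplification, not a formula from the article), the pressure that N small sources fed with signals s_n(t) produce at a listening position r in free field is simply the sum of their attenuated and delayed contributions:

p(\mathbf{r},t) \;\approx\; \sum_{n=1}^{N} \frac{g_n}{4\pi\,\lVert \mathbf{r}-\mathbf{r}_n\rVert}\, s_n\!\left(t - \frac{\lVert \mathbf{r}-\mathbf{r}_n\rVert}{c}\right)

where r_n is the position of source n, g_n its gain, and c the speed of sound. Choosing the gains and the delays hidden inside each s_n is what lets a row of loudspeakers imitate the wavefront of a source that is not physically there; the three examples that follow are simply particular choices of these gains and delays.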
The first example is illustrated in figure 1. This image represents what physically occurs when twenty-four loudspeakers placed 17.5 cm apart are each fed the same signal: each amplifier and each loudspeaker receives exactly the same electrical signal. Therefore, even though each individual source becomes the centre of a spherical sound wave, the combination of all of the speakers produces the equivalent of a plane wave in front of the array, at least in the horizontal plane. As a result, as figure 1 illustrates, two listeners at two different locations will be exposed to a similar experience of directionality in the sound, as indicated by the arrows shown on the diagram.
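For readers who like to tinker numerically, here is a rough Python/NumPy sketch (my own illustration, not the authors' simulation code) of the figure 1 situation: it sums the spherical waves of twenty-four in-phase sources so you can inspect the nearly flat wavefronts that result in front of the array. The size of the listening area and the grid resolution are arbitrary assumptions.

import numpy as np

c = 343.0                      # speed of sound in air (m/s)
f = 400.0                      # monoharmonic frequency used in figures 1 and 2
k = 2 * np.pi * f / c          # wavenumber

n_src = 24                     # twenty-four loudspeakers...
spacing = 0.175                # ...placed 17.5 cm apart
x_src = (np.arange(n_src) - (n_src - 1) / 2) * spacing   # sources along the line y = 0

# listening area: a horizontal region in front of the array (sizes are arbitrary)
x = np.linspace(-2.5, 2.5, 250)
y = np.linspace(0.1, 3.0, 150)
X, Y = np.meshgrid(x, y)

field = np.zeros_like(X, dtype=complex)
for xs in x_src:
    r = np.sqrt((X - xs) ** 2 + Y ** 2)    # distance from this source to each grid point
    field += np.exp(-1j * k * r) / r       # spherical wave; every source has the same phase

# The phase map np.angle(field) shows nearly straight wavefronts in front of
# the array: the combination behaves like a plane wave, as figure 1 illustrates.

Plotting the real part or the phase of "field" with your favourite plotting tool reproduces the kind of picture shown in the figure.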
Figure 2 resembles the first, in that each source produces the same signal; but in this case the individual signals are delayed in time. A time delay of one millisecond was set between neighbouring sources (zero time delay corresponds to the top part of figure 2). In a practical situation, this result can be reached by using cascaded delay modules within an open digital-audio environment such as the Pure Data computer-music system. By imposing such a cascaded structure controlled by a single value of temporal delay, it is possible to control the perceived direction of the resulting sound. Another difference between figures 1 and 2 is that the simulation shown in figure 2 includes a progressive reduction of amplitudes near the extremities of the array (the farthest source is weighted by 25%, the next by 50% and the third by 75%), resulting in a smoother wave formation.
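The previous sketch extends directly to the figure 2 case: each source gets the common signal delayed by a multiple of one millisecond, and (I assume the taper applies at both extremities) the three sources at each end of the array are attenuated. At a single frequency a pure time delay simply becomes a phase factor, which is what the sketch below exploits; again, this is my own illustration rather than the authors' code.

import numpy as np

c, f = 343.0, 400.0
k = 2 * np.pi * f / c
n_src, spacing, dt = 24, 0.175, 0.001        # 1 ms cascaded delay between neighbours

x_src = (np.arange(n_src) - (n_src - 1) / 2) * spacing
delays = np.arange(n_src) * dt               # 0 ms, 1 ms, 2 ms, ... along the array
gains = np.ones(n_src)
gains[:3] = [0.25, 0.50, 0.75]               # progressive reduction at one extremity...
gains[-3:] = [0.75, 0.50, 0.25]              # ...and at the other, to smooth the wavefront

x = np.linspace(-2.5, 2.5, 250)
y = np.linspace(0.1, 3.0, 150)
X, Y = np.meshgrid(x, y)

field = np.zeros_like(X, dtype=complex)
for xs, g, tau in zip(x_src, gains, delays):
    r = np.sqrt((X - xs) ** 2 + Y ** 2)
    # a pure delay of tau seconds is a phase shift of 2*pi*f*tau at this one frequency
    field += g * np.exp(-1j * 2 * np.pi * f * tau) * np.exp(-1j * k * r) / r

# Varying dt between 0 and 0.001 s tilts the resulting wavefront away from
# the broadside direction, which is how the perceived direction is controlled.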
In figure 3, all the sources are fed with a single signal: a short group of three impulses. Along with this signal, a group of three impulses spread farther apart in time is sent exclusively to the fifth source, filled in with black in figure 3 (again, this could be replaced by any other signal). These two signals can produce two separate auditory events at the two listening positions; the perceived directions are shown by arrows. As suggested by the variations in perceived incidence at position A or B, an illusion of perspective is created. Here, perspective means that a movement of the listener implies a coherent change in the perceived spatial scene. Applied to the case represented in figure 3, one perceived source undergoes a large change of perceived direction when the listener moves from A to B, as if that source were close to the listener. The other perceived source does not change perceived direction significantly following the same movement from A to B, as if this second source were farther away.
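As a time-domain illustration of the figure 3 idea (again my own sketch, with an arbitrary sample rate, pulse spacings and pulse shape), the two feed signals can be built as short bursts of band-limited pulses: a tight burst goes to every source, and a more widely spaced burst goes to the fifth source alone.

import numpy as np

fs = 48000                                  # sample rate (Hz), an assumption
n_src = 24
length = int(0.5 * fs)                      # half a second of output

def pulse_train(spacing_s, n_pulses=3, f0=400.0, width_s=0.005):
    # a band-limited burst: n short Hann-windowed tone pips, spacing_s apart
    out = np.zeros(length)
    pip = np.hanning(int(width_s * fs))
    t = np.arange(pip.size) / fs
    pip = pip * np.sin(2 * np.pi * f0 * t)
    for i in range(n_pulses):
        start = int(i * spacing_s * fs)
        out[start:start + pip.size] += pip
    return out

feeds = np.zeros((n_src, length))
feeds[:] = pulse_train(spacing_s=0.02)      # tight three-pulse burst to every source
feeds[4] += pulse_train(spacing_s=0.08)     # wider burst to the fifth source only

Each row of "feeds" is then the signal for one loudspeaker; played back over the array, the common burst and the fifth-source burst behave like the two auditory events described above.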
As the three figures show, these interesting
potential applications (including the per-
spective effect) could open the way to new
practices without the use of more expen-
sive commercial technologies (like the afore-
mentioned WFS, which is fundamentally
based on the same principles). All that is
needed is a broader education and under-
standing of the phenomena described in
the preceding paragraphs, along with artistic
experimentation.
One might ask, "How can I build and use loudspeaker arrays for musical performance?" The hardware requirements are a number of roughly similar loudspeaker cabinets, a set of corresponding multichannel amplifiers, a multiple-output sound card and a computer (as described by Barry Prophet in Musicworks 91). Once the compact array is assembled and everything is wired, you then configure the software. I strongly recommend modular software like Pure Data or Max/MSP to create your own multichannel patch. A Pure Data example is shown in figure 4. This patch simply connects a given sound signal (a sound object, a recorded musical instrument or any electroacoustic manipulation) to the loudspeakers after the signal has passed through the time delays and gain controls that you have set. Using only eight loudspeakers, such a configuration would produce something like the field in figure 2, with cascaded delays of one millisecond. Slowly changing the time delay from zero to one millisecond would transform the projected wave from the plane wave of figure 1 to the steered wave of figure 2. To work with more than one sound signal, an independent layer of such a distribution system would be needed for each signal, and the whole result must then be sent to the array. The patch shown in figure 4 is a simple introductory example, somewhat limited but a good place to begin.
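If you prefer to prototype the same signal flow outside Pure Data first (one input, a per-channel gain and cascaded delay, eight outputs), it can be sketched offline in Python. The file names and the soundfile package are assumptions made for the example; the delays and gains are yours to set.

import numpy as np
import soundfile as sf                       # assumed package for reading/writing audio

mono, fs = sf.read("source.wav")             # hypothetical input file
if mono.ndim > 1:
    mono = mono.mean(axis=1)                 # fold a stereo file down to mono

n_ch = 8                                     # eight loudspeakers, as in the text
delay_step = 0.001                           # 1 ms between neighbouring channels
gains = np.ones(n_ch)                        # per-channel gains (edit to taste)

max_delay = int((n_ch - 1) * delay_step * fs)
out = np.zeros((len(mono) + max_delay, n_ch))
for ch in range(n_ch):
    d = int(round(ch * delay_step * fs))     # this channel's cascaded delay, in samples
    out[d:d + len(mono), ch] = gains[ch] * mono

sf.write("array_out_8ch.wav", out, fs)       # one output channel per loudspeaker

With all gains at 1.0 and 1 ms steps this gives, for an eight-source array, the kind of field shown in figure 2; setting delay_step to zero brings you back to the in-phase case of figure 1.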
Fig. 1. The monoharmonic (400 Hz) sound field produced in a horizontal plane by feeding all sources in phase, i.e., with exactly the same signal.

Fig. 2. The monoharmonic (400 Hz) sound field produced by feeding all sources the same signal with a cascaded delay, i.e., with a 1 ms delay between neighbouring sources.

Fig. 3. Possibilities of perspective with two perceived auditory events. The sound field produced by feeding all sources, in phase, a band-limited three-pulse signal, while a second band-limited three-pulse signal is sent to the fifth source only.

If you are interested in real-time modulation and array control for live spatialization as a supplementary musical dimension, you must modulate these gain and delay modules in real time, using either standard computer interfaces (keyboard and mouse) or other hardware control devices (multimodal sensors such as movement detectors or touch-sensitive devices, connected via MIDI) for more expressive or even global control (all the gains changing at the same time, for example). Of course, playing with time delays and amplitude differences between loudspeakers in the array is only part of what can be exploited in creating live performances and sound installations. Other effects
(equalization, distortion, etc.) for a given
sound signal can also be varied along the
array, which might produce interesting spa-
tial effects.
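As a minimal control-side sketch of the real-time idea just described (assuming the Python mido package and an attached MIDI controller; the controller numbers are arbitrary), two knobs can be mapped to the common cascade delay and to a master gain, values that your audio patch or engine would then read and apply.

import mido                                   # assumed MIDI package

state = {"delay_s": 0.0, "gain": 1.0}         # parameters the audio side would poll

with mido.open_input() as port:               # opens the default MIDI input port
    for msg in port:                          # blocking loop over incoming messages
        if msg.type != "control_change":
            continue
        if msg.control == 1:                  # CC 1 (mod wheel) -> cascade delay, 0 to 1 ms
            state["delay_s"] = (msg.value / 127.0) * 0.001
        elif msg.control == 7:                # CC 7 (volume) -> master gain, 0 to 1
            state["gain"] = msg.value / 127.0
        print(state)                          # replace with a message to your audio engine

In Pure Data or Max/MSP the same mapping is a ctlin object feeding the delay and gain parameters of the patch in figure 4.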
Although the frontal speaker arrangement was presented here as a more economical option, the same principles can be used in spaces surrounded by compact groups of sources. Creative experimentation will expand the possibilities for artists who want to explore these concepts in practical situations. The previous examples are no more than a brief glimpse of the world of arrays; there is a huge number of technical and scientific publications on this subject.
I can only hope that what I have shown here with these simple examples might encourage and inspire more creative experiments. I suggest leaving behind any stringent technical concepts or theories and instead experimenting creatively, carefully observing the resulting perceptions and auditory sensations obtained with this approach to sound projection.
As a final and general note, I would like to stress that technical achievements, as impressive as they may be, should never overrule the artistic and musical needs of a creative production. This becomes a risk, since using more or less complex technologies for artistic purposes often requires a high technical investment on the part of the artist, which can produce confusion between goals that may not be compatible. Which is more important, the success of the engineering or the integrity of the artistic work? Technology and the corresponding practical knowledge would ideally be a source of inspiration that initiates a poetic direction, rather than becoming a dominating ideology.
This is neither a technophiliac nor a technophobic statement (see Pasquier 2005 for a concise review of technophilia and technophobia). However, I would suggest that technophobia (and a lack of technological education) is surely a tricky position to assume in an era when technology is growing increasingly important in everyday life and culture: the most dangerous position is that of indifference.
Philippe-Aubert Gauthier is a Ph.D. student in acoustics with the Groupe d'Acoustique et de vibration de l'Université de Sherbrooke and the Centre for Interdisciplinary Research in Music, Media, and Technology at McGill University. He is also involved in visual and sound installation and in various musical explorations. Thanks to Hugo L. Fournier for help with the translation from French.
references
Bayle, François. 1993. Musique Acousmatique: Propositions... Positions. Paris: Buchet/Chastel.
Berkhout, Augustinus J., Diemer de Vries, and Peter Vogel. 1993. "Acoustic Control by Wave Field Synthesis." Journal of the Acoustical Society of America 93: 2764–78.
Clozier, Christian. 2001. "The Gmebaphone Concept and the Cybernéphone Instrument." Computer Music Journal 25(4): 81–90.
Davis, Don, and Carolyn Davis. 1997. Sound System Design. Boston: Focal Press.
Gauthier, Philippe-Aubert, Alain Berry, and Wieslaw Woszczyk. 2004. "An Introduction to the Foundations, the Technologies and the Potential Applications of the Acoustic Field Synthesis for Audio Spatialization on Loudspeaker Arrays." <http://www.econtact.ca> 7.2.
Gauthier, Philippe-Aubert, Alain Berry, and Wieslaw Woszczyk. 2005. "In-room Sound Field Reproduction Using Optimal Control Techniques: Simulations in the Frequency Domain." Journal of the Acoustical Society of America 117: 662–78.
Pasquier, Philippe. 2005. "A Reflection on Artificial Intelligence and Contemporary Creation: The Question of Technique." Parachute 119: 153–65.
Prophet, Barry. 2005. "Your Computer As Instrument, Part 2: Software Programs." Musicworks 91: 12–13.
Verheijen, Edwin N. G. 1998. Sound Reproduction by Wave Field Synthesis. Ph.D. thesis. Delft: Delft Technical University.
Fig. 4. A Pure Data realization for controlling the time delays and gains of a source signal sent to eight loudspeakers. Signals flow from top to bottom.
French summary: This article proposes the basis of an approach to electroacoustic sound projection that relies not on a deployment of loudspeakers around a small or medium-sized audience, but on a compact distribution of the sources. With a compact frontal distribution of loudspeakers, it is possible to produce spatial perceptions of sound of a very different nature from those obtained with loudspeakers distributed uniformly around the periphery of the audience, as is often experienced at typically acousmatic concerts.