
The Murmurator: A Flocking Simulation-Driven Multi-Channel Software Instrument for Collaborative Improvisation

Eli Stine, University of Virginia, elistine@virginia.edu
Kevin Davis, University of Virginia, kwd8ce@virginia.edu

ABSTRACT

This paper describes the Murmurator, a flocking-simulation-driven software instrument created for use by multiple computer improvisers diffusing sound over a multi-channel speaker configuration. Building upon previous projects that use natural system models to control sound parameters, the authors focus on the potentials of this paradigm for collaborative improvisation, allowing performers both to improvise with each other and to adapt to performer-controllable levels of autonomy in the Murmurator. Further, the Murmurator's facilitation of a dynamic relationship between musical materials and spatialization (for example, having the resonance parameter of a filter applied to a sound depend on that sound's location in space or its velocity) is foregrounded as a design paradigm. The Murmurator's collaborative genesis, relationship to improvisational and multi-channel acousmatic performance practices, software details, and ongoing development are discussed.

1. INTRODUCTION

The history of performed electroacoustic spatialization can be traced back to 1951, with the development and use of the potentiomètre d'espace by Pierre Schaeffer and Pierre Henry at the Radiodiffusion-Télévision Française [1]. Since then, as more and more systems and spaces dedicated to multi-channel sound diffusion of acousmatic music become available (non-standard multi-loudspeaker systems such as the BEAST or the Acousmonium, or high-density loudspeaker arrays such as the Espace de Projection or The Cube, for example [2]), so do the tools (often software tools) designed to control the projection of sound in space, offering more and more methods, algorithmic processes, and interfaces for dynamic control of spatialization.

Each of these systems privileges different performance contexts, technologies, and musical aesthetics, ranging in performativity, generalizability, and extra-musical relationships. Examples include Richard Garrett's Audio Spray Gun [3], Robert Normandeau's Octogris [4], Scott Wilson et al.'s BEASTmulch [5], Jan Schacher's ICST Ambisonics Tools [6], IRCAM's Spat~ [7], Ico Bukvic's D4 [8], and Natasha Barrett's Cheddar [9]. For an in-depth comparison see Perez-Lopez [10] and/or Marshall et al. [1].

The Murmurator exists as a result of a shared interest between the authors in extending this set of tools to include systems that are designed for collaborative improvisation in multi-channel loudspeaker contexts (a class of spatialization systems that Perez-Lopez finds scarce in his 2015 research [10]) and, further, to investigate ways in which the interactivity and liveness of spatialization maximized in many of these systems can be extended to other musical dimensions; in other words, "un-fixing" the musical materials that many of these systems take as immutable in the same way that spatialization is "un-fixed", and subsequently handling the resultant cognitive load that such an all-encompassing real-time musical control paradigm presents [1].

2. MOTIVATION AND GOALS

Keeping these shared interests in mind, the authors directed their research with two primary goals:

Goal 1: To investigate the potentialities of adaptive performance and improvisation in a multi-channel, multi-performer acousmatic context;

Goal 2: To position real-time control of sound diffusion, and integration between spatialization and other musical dimensions, as a focal point of software instrument design.

Goal 1 stems from both authors' improvisational backgrounds, their dissatisfaction with the available methodologies for performing and improvising in multi-channel contexts, and the belief that real-time collaborative experimentation is exceedingly important to multi-channel sound diffusion practice. Practically speaking, being able to perform on as many different speaker configurations as possible was of interest to the authors, so a flexible, multi-purpose technique for spatialization not wedded to a particular speaker configuration was sought.

Goal 2 seeks tools for engaging with immersive electroacoustic space that differ from 1) traditionally diffused acousmatic music performance and 2) discrete multi-channel fixed media playback. Instead, the authors were interested in multi-channel performance software that not only positions sound diffusion as a performative action but also deeply affects the musical materials being performed with; in other words, a system that facilitates an integration between spatialization and other musical dimensions.

The cognitive load of such a complex musical system may be mitigated through a number of different methods [1]. First, the multiplexing of parameters, tethering the control of multiple parameters to a single parameter (or several), effectively reduces the input space. Second, supervised stochastic processes may be applied to parameters, allowing the overall qualities of the dynamics of the input to be defined but not precisely managed over time, a quality that can be advantageous in an improvisatory setting. Third, algorithms may be deployed that automatically control parameters, the meta-parameters of which are then supervised by the performer. The relegated control of an instrument that deploys these methods suggests a performance paradigm in which processes are set into motion and tweaked (in the frequency, timbral, and/or spatial domains), rather than one with a one-to-one relationship between activating gesture and spatio-sonic utterance.

Summing this up, through research into these goals the authors came to the following solutions, outlined in more detail in the next section:

Solution to Goal 1: Take advantage of real-time algorithmic and/or probabilistic processes to mitigate the cognitive load required to manage the many different extra-spatial musical parameters of interest, and harness the flexibility and speaker independence of ambisonics;

Solution to Goal 2: In tandem with Goal 1, use performer-tweakable computer models to control sound diffusion and, further, use parameters of these models to affect other musical dimensions, allowing a given spatialization to alter the musical materials that are being spatialized in real time.

3. DESIGN

As a function of these solutions, the authors decided to create a software instrument that combined the following components.

3.1 Probabilistic Sampling

Probabilistic sampling is a system previously used by Stine as part of his research into spatialization via particle systems which allows for stochastic meta-control of sample selection. Rather than sequencing a corpus of samples over time, the probabilistic distribution of the samples is controlled by the user, relinquishing exact timing in exchange for the ability to shape the likelihood of a sample's sonic occurrence. This system facilitates the use of real-world sound rather than direct digital synthesis (an interest of both authors), but also allows for efficient high-level textural and gestural control. The stochastic nature of this system also enables the injection of significant unpredictability, requiring the performer(s) to adapt to a given realization of their chosen distribution.
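The selection logic can be summarized in a short sketch. The following Python fragment is a minimal illustration of table-driven weighted sample selection, not the authors' Max implementation; the corpus file names and weight values are purely hypothetical.

```python
import random

def pick_source_sound(corpus, weights, rng=random):
    """Draw one file from the corpus according to user-set slider weights.

    `weights` is an unnormalized list of non-negative slider values; a weight
    of zero means the corresponding sound is never selected.
    """
    total = sum(weights)
    if total <= 0:
        return None                      # all sliders down: nothing sounds
    r = rng.uniform(0, total)
    cumulative = 0.0
    for path, weight in zip(corpus, weights):
        cumulative += weight
        if r <= cumulative:
            return path
    return corpus[-1]                    # guard against floating-point round-off

# Hypothetical corpus: "birds.aif" is twice as likely as "water.aif";
# "metal.aif" is muted and will never be granularized.
corpus = ["birds.aif", "water.aif", "metal.aif"]
weights = [1.0, 0.5, 0.0]
print([pick_source_sound(corpus, weights) for _ in range(8)])
```

Shaping the weights over time, rather than choosing samples directly, is what gives the performer the likelihood-level control described above.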
3.2 Granular Synthesis

Granular synthesis (GS) is a highly developed sound synthesis technique first described by Dennis Gabor, formalized by Iannis Xenakis, and implemented on computers by Curtis Roads and Barry Truax, among others [12]. An introduction to GS may be found in [13]. For our purposes the wide number of input parameters, high level of control over textural and timbral dimensions of sound, and ability to work with recorded sound made GS desirable. The granular synthesis engine in the Murmurator mixes multiple audio files chosen by the probabilistic sampling method outlined in 3.1 into a set of grain streams, each of which is assigned a position in space by the natural system model described in the next section.
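As a rough sketch of how one such grain stream can be rendered (assuming a source file already loaded as a NumPy array; the parameter names and ranges are illustrative rather than those of the Murmurator's engine), each grain is a short windowed slice of the source whose onset, duration, and inter-grain interval are drawn from user-controlled stochastic ranges:

```python
import numpy as np

def grain_stream(source, sr, n_grains=200, rate_ms=(20, 80),
                 dur_ms=(30, 120), start_pct=(0.0, 1.0), rng=None):
    """Overlap-add a stream of windowed grains cut from a mono source array."""
    rng = rng or np.random.default_rng()
    events, onset = [], 0
    for _ in range(n_grains):
        dur = int(rng.uniform(*dur_ms) * sr / 1000)                 # grain length
        start = int(rng.uniform(*start_pct) * (len(source) - dur))  # read position
        events.append((onset, source[start:start + dur] * np.hanning(dur)))
        onset += int(rng.uniform(*rate_ms) * sr / 1000)             # inter-grain gap
    out = np.zeros(onset + int(max(dur_ms) * sr / 1000))
    for pos, grain in events:
        out[pos:pos + len(grain)] += grain                          # mix the grains
    return out

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)  # stand-in for an audio file
stream = grain_stream(source, sr)
```

In the Murmurator each such stream is then positioned in space by its corresponding agent, as described next.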
3.3 Spatialization by Altering Parameters of a Natural System Model

Rather than controlling sound diffusion directly, the authors wanted to relegate that element of control to the changing parameters of a computer model of a system, a system whose meta-parameters could then be performed directly. Models researched by the authors included cellular automata, Lindenmayer systems, ecosystem models, molecular simulations, crowd behavior, and flocking simulations.

The authors chose a simple flocking algorithm developed by Craig Reynolds titled Boids, which simulates the ways in which flocks of birds self-organize using three simple, locally controlled behaviors (Figure 1) [14]. Boids was chosen as the system model because of its simplicity to implement, its large but intuitive set of input dimensions, and its direct mapping to physical space. In the Murmurator each agent in the Boids system corresponds to a stream of grains in the GS system, allowing for dynamic, non-random spatialization of the GS output.

Figure 1. The three primary behavioral rules that result in emergent flocking behavior in Craig Reynolds' Boids.
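For reference, the three rules in Figure 1 (separation, alignment, and cohesion) reduce to a small per-agent update. The sketch below is a generic two-dimensional Boids step written in Python; it is not the JavaScript implementation bundled with Max, and the weights only loosely correspond to the flock parameters exposed in the Murmurator's interface (section 4.2).

```python
import numpy as np

def boids_step(pos, vel, dt=0.02, neighbor_radius=0.3, sep_radius=0.1,
               w_sep=1.5, w_align=1.0, w_coh=1.0, max_speed=1.0):
    """Advance a 2D flock by one time step. pos and vel are (N, 2) arrays."""
    acc = np.zeros_like(vel)
    for i in range(len(pos)):
        offsets = pos - pos[i]                        # vectors to all other agents
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < neighbor_radius)
        if near.any():
            acc[i] += w_coh * offsets[near].mean(axis=0)              # cohesion
            acc[i] += w_align * (vel[near].mean(axis=0) - vel[i])     # alignment
        crowd = (dists > 0) & (dists < sep_radius)
        if crowd.any():
            acc[i] -= w_sep * offsets[crowd].mean(axis=0)             # separation
    vel = vel + dt * acc
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = vel * np.minimum(1.0, max_speed / np.maximum(speed, 1e-9))  # clamp speed
    return pos + dt * vel, vel

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, (16, 2))          # one agent per grain stream
vel = rng.uniform(-0.1, 0.1, (16, 2))
for _ in range(100):
    pos, vel = boids_step(pos, vel)        # pos drives the spatialization
```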
The use of Boids in interactive spatial music and with GS has been researched by a number of different authors. David Kim-Boyle employs an implementation of Boids to control sound spatialization using a custom-made panning system over a 5.1-channel system [15]. Kim-Boyle also experiments with the relationship between spatial and musical dimensions by "mapping the movements of particles to the spatial location of individual bins in a short-time Fourier transform." Scott Wilson makes use of a three-dimensional flocking system to assist in spatialization of the BEASTmulch system [5], incorporating ambisonics to allow for variable speaker configurations and also including control of parameters on the fly through SuperCollider code and an external controller, although not putting emphasis on interface, adaptive performance, or improvisational context [16]. Bates and Furlong employ Boids to generate spatial data used to create score files in Csound, which are performed by synthesis instruments including GS [17]. Along with using higher order ambisonics, Bates and Furlong model the Doppler effect, early reflections, and global reverberation, touching on (at a low, integrated level) the control over musical materials via spatialization that the authors of this work are interested in. Schacher et al. use Boids-like swarm algorithms not only to generate sound but to produce visuals as well, extending this practice to the audio-visual domain [18]. Rather than being performed, high-level control of their system is managed via a Finite State Machine which automates large-scale changes in the system. In the "Future Work" section Schacher et al. also express interest in relating spatialization to the sonic output of the system in an intriguing way: by "endowing agents with the capability to perceive aspects of the acoustic output", thus affecting their own behavior in response to the sound they produce [18].

As the above examples show, Boids has been used numerous times in spatialization contexts, but the authors were curious to see how it could be extended to act as the focal point of an improvisational system, simultaneously reducing the cognitive load of spatialization control and taking advantage of its emergent properties to enact both autonomy and multiplexed control [1].

3.4 System for Controlling Granular Synthesizer Parameters via Model of Natural System

The authors sought a means of having characteristics of the agents in the Boids model affect their corresponding grain streams in order to create a deep connection between spatialization and other musical dimensions. These connections would go beyond integrated psychoacoustically-driven spatialization effects (e.g. Bates and Furlong's modeled Doppler effect [17]) and would allow for expressive control over the granular synthesizer via the model-controlled spatialization. Such a system requires careful selection of which model parameters will act as drivers (agent velocity, acceleration, location, density, etc.), what sonic dimensions will be affected (speed, amplitude, location, duration, or the parameters of DSP effects applied to the grain streams), and what interface will be created to control these relationships in an intuitive, expressive way, while still minimizing the cognitive load on the user.

3.5 Higher Order Ambisonic Diffusion

Lastly, to allow for highly flexible and controllable object-based sound diffusion (driven by the locations of the agents in the Boids simulation), the authors decided to work with higher order ambisonics rather than other alternatives (vector-based amplitude panning, wavefield synthesis, etc.). The portable performativity of the system is increased by ambisonics' ability to work with a variety of speaker configurations, dependent on the ambisonic order and the particulars of the configuration [19].
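A sense of why this choice decouples the instrument from any particular loudspeaker layout is given by the two-dimensional (circular-harmonic) encoding and decoding relations: a source at azimuth theta is encoded into 2M + 1 channels, and only at playback is that channel set decoded to whatever ring of speakers is available. The sketch below is a textbook-style illustration under those assumptions; it does not reproduce the normalization or channel ordering of the ICST tools used in the Murmurator.

```python
import numpy as np

def encode_2d(signal, azimuth, order=3):
    """Encode a mono signal at `azimuth` (radians) into 2D ambisonic channels:
    [1, cos(t), sin(t), cos(2t), sin(2t), ...] (unnormalized circular harmonics)."""
    gains = [1.0]
    for m in range(1, order + 1):
        gains += [np.cos(m * azimuth), np.sin(m * azimuth)]
    return np.outer(gains, signal)                 # shape: (2*order + 1, n_samples)

def decode_ring(channels, n_speakers, order=3):
    """Basic decode of a 2D ambisonic signal to an evenly spaced speaker ring."""
    speaker_az = 2 * np.pi * np.arange(n_speakers) / n_speakers
    rows = [0.5 * np.ones(n_speakers)]             # reduced weight for the omni term
    for m in range(1, order + 1):
        rows += [np.cos(m * speaker_az), np.sin(m * speaker_az)]
    decoder = np.stack(rows).T * (2.0 / n_speakers)
    return decoder @ channels                      # shape: (n_speakers, n_samples)

sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
b_format = encode_2d(sig, np.pi / 4, order=3)
octophonic = decode_ring(b_format, n_speakers=8, order=3)  # 8 >= 2*3 + 1 channels
```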
3.6 Final Design Overview

In summation, the Murmurator is built around a two-dimensional bird flocking simulation consisting of a set of agents under constraints that result in naturalistic emergent behavior. Each agent corresponds to a stream in a granular synthesizer that makes use of a probabilistically controlled corpus of audio files chosen by the user. The spatialization of each grain stream is controlled in ambisonic space by the corresponding agent's location. The grain streams are further processed using delay, filtering, and distortion, the parameters of which are influenced by the spatial characteristics of the agents corresponding to each grain stream.

4. IMPLEMENTATION

The Murmurator is programmed in Max 7, a graphical programming language for media developed by Cycling '74, on an Apple MacBook Pro (running macOS 10.12.5). The use of laptops for programming and testing was intentionally chosen to enforce a lightweight design for performance portability. In addition to original code created by the authors, the Murmurator makes use of Jan Schacher's ambisonic Max external objects, developed at the ICST at Zurich University of the Arts [6], and an altered implementation of Craig Reynolds' Boids flocking simulation (included in the JavaScript examples of the Max distribution; no author given) [14].

The functionality of the program is divided into four main modules: global settings and preset control, flock view and control, effector and group control, and granular synthesizer control. An outline of the functionality of each module follows, supplemented with reflections on how the functionality relates to the authors' goals.

4.1 Global Settings and Preset Control

Figure 2. Global Settings and Preset Control Module.

The Global Settings and Preset Control module allows the user to specify how many agents the simulation has (simultaneously controlling the polyphony of the granular synthesizer) and to load in a folder of audio files which act as source sounds for the granular synthesizer. The probability of picking a source sound is controlled via a table (center left of Figure 2) whose members correspond to the audio file corpus: the higher the slider associated with a sound, the more likely it is to be granularized (if the slider for a corresponding sound is all the way down it will not be heard), implementing the probabilistic sampling described in 3.1. The speaker setup may also be adjusted here, allowing for toggling between stereo, quadraphonic, or octophonic diffusion, with the positions of the speakers adjustable in the ambisonic virtual soundfield.

This module also contains a preset system which allows a user to save a snapshot of all current settings of the program, recall this snapshot, and linearly interpolate between different snapshots. In conjunction with the preset system, each controllable user interface parameter may be set to its value on snapshot 1 by hovering over it and pressing the 'esc' key, allowing a performer to quickly reset individual settings during performance. The windowing envelope for the granular synthesizer has its own control here, along with a simple five-band equalizer and limiter to equalize and limit the master output of the system, respectively.
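The snapshot interpolation amounts to element-wise linear interpolation over the stored parameter values. A minimal sketch, assuming presets are stored as flat dictionaries of numeric parameters (the parameter names below are hypothetical, and Max's own preset machinery is not shown):

```python
def interpolate_presets(snapshot_a, snapshot_b, t):
    """Linearly interpolate every parameter between two snapshots (0 <= t <= 1)."""
    return {name: (1 - t) * snapshot_a[name] + t * snapshot_b[name]
            for name in snapshot_a}

# Hypothetical snapshots: halfway between a sparse and a dense texture.
sparse = {"n_agents": 8,  "grain_rate_ms": 90.0, "delay_feedback": 0.1}
dense  = {"n_agents": 32, "grain_rate_ms": 15.0, "delay_feedback": 0.6}
print(interpolate_presets(sparse, dense, 0.5))
```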

An automation and audio recording submodule is also included. The automation recording system allows the user to record the performance of several select parameters by arming them, pressing and holding the space bar, and altering the parameters. They may then play the recorded automation back and loop it at different rates, effectively enabling parameters to "play themselves" and allowing the performer to focus on other parameters in the meantime. The audio recording system allows the performer to record the entire output of the system to disk as a third-order undecoded ambisonic audio stream, which may then be decoded to any number of speaker configurations (or binaural) post-performance.
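The record-and-loop behavior amounts to storing timestamped values for each armed parameter while the space bar is held and then replaying them cyclically, with the playback clock scaled by a user-set rate. A schematic sketch of that idea (class and method names are illustrative; the real-time scheduling inside Max is not modeled):

```python
class AutomationLane:
    """Record timestamped values for one armed parameter and loop them back."""

    def __init__(self):
        self.events = []                              # list of (seconds, value)

    def record(self, t, value):
        self.events.append((t, value))

    def play(self, t, rate=1.0):
        """Return the recorded value at looped, rate-scaled time t."""
        if not self.events:
            return None
        duration = self.events[-1][0] or 1e-9
        looped_t = (t * rate) % duration              # wrap around the recording
        value = self.events[0][1]
        for event_t, event_value in self.events:      # hold the latest value
            if event_t <= looped_t:                   # at or before looped_t
                value = event_value
            else:
                break
        return value

lane = AutomationLane()
for t, v in [(0.0, 0.2), (0.5, 0.8), (1.0, 0.4)]:     # captured while space bar held
    lane.record(t, v)
print([lane.play(t, rate=2.0) for t in (0.0, 0.3, 0.6)])
```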
4.2 Flock View and Control

Figure 3. Flock View and Control Module.

The Flock View and Control module gives the user visual feedback on where the simulated agents are in two-dimensional space with respect to a unit circle (Figure 3, left), the locations of which are used to drive the spatial coordinates of each of the streams of granular synthesis in the ambisonic virtual soundfield.

The flock control section gives the user high-level control over the flock: the ability to translate, scale, and rotate it, along with low-level, precise control over the parameters of the flocking simulation included in this implementation, specifically separation, alignment, inertia, gravity, separation threshold, coherence, friction, and maximum velocity. In addition, a two-dimensional XY-slider meta-controller (Figure 3, top right) controls groups of the low-level flocking simulation parameters, facilitating intuitive and expressive control over flocking "spacing" (friction, separation threshold, gravity) and "separation" (separation, maximum velocity, coherence) meta-parameters. Lastly, control of the translate, scale, and rotate parameters may be recorded and played back using the automation recording system described in section 4.1.
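The XY meta-controller effectively multiplexes a single two-dimensional gesture onto the two groups of low-level flock parameters named above. A toy sketch of such a mapping follows; the numeric ranges and axis assignments are invented for illustration and do not reproduce the Murmurator's exact scaling:

```python
def scale(norm, lo, hi):
    """Map a normalized 0..1 controller value into a parameter range."""
    return lo + norm * (hi - lo)

def xy_to_flock_params(x, y):
    """Map the XY pad (0..1 on each axis) onto the two parameter groups."""
    return {
        # x axis -> "spacing": friction, separation threshold, gravity
        "friction":             scale(x, 0.90, 0.999),
        "separation_threshold": scale(x, 0.05, 0.40),
        "gravity":              scale(x, 0.0,  0.2),
        # y axis -> "separation": separation, maximum velocity, coherence
        "separation":           scale(y, 0.1,  2.0),
        "max_velocity":         scale(y, 0.2,  1.5),
        "coherence":            scale(1 - y, 0.1, 1.0),  # looser flocks at high y
    }

print(xy_to_flock_params(0.25, 0.8))
```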
4.3 Effector and Group Control

Figure 4. Effector and Group Control Module.

The Effector and Group Control module gives the user the ability to control effectors, parameters that alter the influence of the location and velocity of agents on GS parameters (Figure 4, left), specifically volume, speed, filter mix, filter frequency, filter resonance, distortion, grain duration, grain start location, and grain rate. For example, a user can control how much an agent's velocity affects the speed of its corresponding GS stream, or how much an agent's distance from the center (in this ambisonics implementation, equivalently its distance from the listener) affects the distortion applied to the corresponding GS stream. Positive and negative correlations can be chosen (by turning an effector's dial right or left, respectively), and an intensity dial affects the sum of influence from velocity and listener distance for each parameter.
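An effector can thus be read as a signed, weighted mapping from an agent's motion onto one granular parameter: the dial's sign sets the direction of correlation, and the intensity dial scales the combined contribution of velocity and listener distance. One plausible formulation is sketched below; the actual scaling curves used in the Murmurator are not specified here, so the `depth` term and the linear combination of drivers are assumptions.

```python
import math

def apply_effector(base_value, agent_pos, agent_vel,
                   dial=0.5, intensity=1.0, depth=0.5):
    """Offset a GS parameter by an agent's speed and distance from the listener.

    dial: -1..1, sign sets positive/negative correlation; intensity: 0..1.
    depth is an illustrative overall modulation amount.
    """
    speed = math.hypot(*agent_vel)
    distance = math.hypot(*agent_pos)          # center of the array = listener
    drive = intensity * (speed + distance)     # summed influence of both drivers
    return base_value * (1.0 + depth * dial * drive)

# Example: a fast agent far from the listener pushes distortion upward.
distortion = apply_effector(0.2, agent_pos=(0.8, 0.5), agent_vel=(0.6, 0.1),
                            dial=0.9, intensity=0.7)
print(round(distortion, 3))
```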

The group control submodule (Figure 4, right) allows for an entirely different method of control from the flock control submodule described in 4.2, incorporating built-in functionality of the ICST ambisonics tools [6]. The agents are divided into groups according to a set group size, and spatial transformations (rotation, Brownian motion, randomized movement in vertical or horizontal space only, and others) are then applied to each group of agents separately, allowing for multiple spatialization logics to be in place at a single time, supplementing the holistic flocking-simulation spatialization control paradigm.

4.4 Granular Synthesizer Control

The Granular Synthesizer Control module gives the user control over all aspects of the polyphonic granular synthesizer through controllers designed in concert with the previous modules. A compromise is made between global and individual grain control to allow for a balance between reduction of cognitive load and the ability to "zoom in" and intimately tweak the settings for individual grain streams. The number of sounding grains may be controlled here (the manipulation of which may also be recorded, played back, and looped using the recording functionality described in 4.1), and stochastic ranges for the grain rate (msec), grain duration (msec), and grain start location (percentage) may be manipulated. Because of the probabilistic nature of picking source sounds (described in 4.1), the grain start location is expressed as a percentage of the sound file duration, and may also be automated using one-dimensional Brownian motion (a random walk) and automatic forward and backward scrub controls.

Figure 5. Granular Synthesizer Control.

The volume and speed of each grain stream may be controlled, with the speed also controllable globally in equal temperament using a keyboard interface (Figure 5, center). The filter frequency of each grain stream may also be adjusted, along with the wet-dry mix, resonance, and choice of band-pass or low-pass filters. In order to enhance the perceived density of the granular synthesizer, the delay system for each grain stream consists of two delay lines which stochastically change their delay length at a stochastically chosen rate and crossfade back and forth between one another to avoid any clicks or other sonic disturbances. The global delay wet-dry mix and feedback parameters may be adjusted, along with stochastic ranges for the delay length (msec) and delay change rate (msec). Lastly, distortion applied to each grain stream (a waveshaping function that emulates an overdriven tube-based circuit) may be controlled globally, user manipulation of which may also be recorded, played back at different rates, and looped (as described in 4.1).
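The dual delay-line scheme (two lines trading off, with a short equal-power crossfade masking each stochastic change of delay time) can be sketched offline as follows. This is an illustrative Python rendering with hypothetical parameter ranges, not the signal network of the Max patch:

```python
import numpy as np

def dual_stochastic_delay(x, sr, max_delay_ms=250, change_rate_ms=(150, 600),
                          fade_ms=40, mix=0.5, rng=None):
    """Alternate between two delay times, crossfading at each stochastic change
    so that no click is produced when a delay length jumps."""
    rng = rng or np.random.default_rng()
    n, fade = len(x), int(fade_ms * sr / 1000)

    def delayed(d):                            # x delayed by d samples, same length
        return np.concatenate([np.zeros(d), x])[:n]

    def new_delay():                           # a fresh, stochastic delay length
        return int(rng.uniform(5, max_delay_ms) * sr / 1000)

    wet, pos, current = np.zeros(n), 0, new_delay()
    while pos < n:
        seg_end = min(pos + int(rng.uniform(*change_rate_ms) * sr / 1000), n)
        wet[pos:seg_end] += delayed(current)[pos:seg_end]   # hold one delay line
        if seg_end < n:                                     # crossfade to the other
            ramp = np.linspace(0, 1, min(fade, n - seg_end))
            upcoming = new_delay()
            span = slice(seg_end, seg_end + len(ramp))
            wet[span] += (np.cos(ramp * np.pi / 2) * delayed(current)[span]
                          + np.sin(ramp * np.pi / 2) * delayed(upcoming)[span])
            current, pos = upcoming, seg_end + len(ramp)
        else:
            pos = seg_end
    return (1 - mix) * x + mix * wet

sr = 44100
dry = np.random.default_rng(0).standard_normal(sr)   # one second of test input
wet = dual_stochastic_delay(dry, sr)
```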
4.5 Performance Evaluation

The Murmurator was premiered on November 9th, 2017 at The Bridge Progressive Arts Initiative in Charlottesville, Virginia over an octophonic sound system (QSC K8s), with each performer outputting eight channels of sound. The audience (approximately 80 people) surrounded the performers and were invited both to view what the performers were doing and to move through the space during the performance.

Figure 6. Premiere Performance by Authors.

While rehearsing for the performance the authors collaboratively curated complementary banks of sounds. A rough form for the piece was then structured via the preset system using these chosen sound banks. Performance involved immediate or slow transitions to and from each preset, with the performative path navigated between each section adapting improvisationally. The authors found that improvisation with the Murmurator often involved either focusing solely on controlling one of the software's modules or listening and intuitively tweaking individual settings in all of the modules.

In attendance at the premiere performance of the Murmurator were a number of students from the University of Virginia's Spring 2017 Technosonics course, a large undergraduate non-major course introducing the history, theory, and practice of electronic music and sound art. As part of that course, students were asked to write descriptive concert reports, and 27 students wrote on the premiere performance of the Murmurator. Across the concert reports, three descriptions were shared by a number of students:

1. The novelty of moving around during the performance (8 students)
2. The perceived improvisational nature of the work (7 students)
3. The novelty of being able to see the performers' tools (6 students)

These comments speak to the success of the transparent, improvised multi-channel performance environment facilitated by the Murmurator. The Murmurator is already undergoing changes as a function of this performance and of performances at NIME, ICMC, and CubeFest 2018, the most prominent of which are outlined in the next section.

5. CONTINUING DEVELOPMENT

5.1 Physical Performance Interface

The Murmurator in its current form is controlled by trackpad and keyboard. Additionally, the software interface (including all parametric control) could be embodied physically via control surfaces or gestural controllers, allowing for easier synchronous control of multiple parameters than is currently possible.

5.2 Expansion to Three Dimensions

Expanding the world of the flocking simulation to three dimensions would allow the Murmurator to be performed on multi-height multi-channel loudspeaker systems. This expansion requires alterations to the currently used implementation of Reynolds' Boids code and to the ambisonic encoding of the agents.

5.3 Inclusion of Other Natural System Models

Rather than limiting the spatialization control to a single system model (Boids), many other simulations and models could be included in the software, each of which would have different input controls, spatial dynamics, and musical potentials. Models explored by the authors include the aforementioned cellular automata and Lindenmayer systems, along with more novel simulations such as waterfowl habitat models and RNA binding simulations.

6. CONCLUSION

The Murmurator builds upon previous projects that make use of granular synthesis and natural system models in multi-channel electroacoustic space, but differentiates itself by being designed explicitly for use in a collaborative improvisation setting. This results in a number of significant, and musically compelling, changes to the system. To reduce the cognitive load required to manage both real-time spatialization of sounds and other musical dimensions, and simultaneously to give the instrument an improvisational "voice", distributed, relegated control permeates all levels of the Murmurator's design, from its probabilistic sampler to the emergent properties of the flocking algorithm. Further, a system to facilitate control of the influence of the spatialization model on other musical dimensions (effectors) establishes a deep connection between the way sounds are spatialized by the system and the processing of the sounds themselves, effectively integrating the parameters of granular synthesis and spatialization in live performance.

Acknowledgments

Special thanks to Natasha Barrett and Eric Lyon for inspiring this work, and to the Virginia Center for Computer Music for providing a space and speaker setup for development.

7. REFERENCES

[1] Marshall, Mark T., Joseph Malloch, and Marcelo M. Wanderley. "Gesture control of sound spatialization for live musical performance." International Gesture Workshop. Springer, Berlin, Heidelberg, 2007.

[2] Zvonar, Richard. "A history of spatial music." Montreal: CEC, 1999.

[3] Garrett, Richard. Audio Spray Gun 0.8 – the Generation of Large Sound-Groups and Their Use in Three-Dimensional Spatialisation. Ann Arbor, MI: Michigan Publishing, University of Michigan Library, 2015.

[4] Normandeau, Robert. "Octogris2 et ZirkOSC2: outils logiciels pour une spatialisation sonore intégrée au travail de composition." Proc. of Journées d'Informatique Musicale (JIM), Montréal, Canada, 2015.

[5] Wilson, S., J. Harrison, and S. L. Ancona. "BEASTmulch." URL: https://www.birmingham.ac.uk/facilities/ea-studios/research/mulch.aspx

[6] Schacher, Jan C. "Seven years of ICST Ambisonics tools for MaxMSP – a brief report." Proc. of the 2nd International Symposium on Ambisonics and Spherical Acoustics, 2010.

[7] Carpentier, Thibaut, Markus Noisternig, and Olivier Warusfel. "Twenty years of Ircam Spat: looking back, looking forward." Proc. of the 41st International Computer Music Conference (ICMC), 2015.

[8] Bukvic, Ivica Ico. "D4: an Interactive 3D Audio Rapid Prototyping and Transportable Rendering Environment Using High Density Loudspeaker Arrays." 2016.

[9] Barrett, Natasha. "Interactive spatial sonification of multidimensional data for composition and auditory display." Computer Music Journal 40.2 (2016): 47–69.

[10] Perez-Lopez, Andres. "3DJ: A SuperCollider framework for real-time sound spatialization." Georgia Institute of Technology, 2015.

[11] Johnson, Bridget. "Emerging Technologies for Real-Time Diffusion Performance." Leonardo Music Journal 24.1 (2014): 13–15.

[12] Roads, Curtis. "Automated granular synthesis of sound." Computer Music Journal 2.2 (1978): 61–62.

[13] Roads, Curtis. "Introduction to Granular Synthesis." Computer Music Journal 12.3 (1988): 11–13.

[14] Reynolds, Craig. "Flocks, herds, and schools: A distributed behavioral model." Computer Graphics 21.4 (SIGGRAPH '87 Conference Proceedings), 1987: 25–34.

[15] Kim-Boyle, David. "Sound spatialization with particle systems." Proceedings of the 8th International Conference on Digital Audio Effects (DAFx-05), Madrid, Spain, 2005.

[16] Wilson, Scott. "Spatial swarm granulation." Proceedings of the 2008 International Computer Music Conference. SARC, ICMA, 2008.

[17] Bates, Enda, and Dermot Furlong. "Score File Generators for Boids-Based Granular Synthesis in Csound." Audio Engineering Society Convention 126. Audio Engineering Society, 2009.

[18] Schacher, J. C., D. Bisig, and M. Neukom. "Composing with swarm algorithms: creating interactive audio-visual pieces using flocking behaviour." Proceedings of the International Computer Music Conference, 2011.

[19] Blauert, Jens, and Rudolf Rabenstein. "Providing surround sound with loudspeakers: a synopsis of current methods." Archives of Acoustics 37.1 (2012): 5–18.
