An Introduction to Interactive Music for Percussion and Computers

By
Copyright 2014
Von Elton Hansen

Submitted to the graduate degree program in Music and the Graduate Faculty of the University of Kansas in partial fulfillment of the requirements for the degree of Doctor of Musical Arts.

________________________________
Chairperson Dr. Bryan Haaheim

________________________________
Ji Hye Jung

________________________________
Dr. Scott Watson

________________________________
Dr. Forrest Pierce

________________________________
Dr. Jacob Dakon

Date Defended: 05/01/2014


The Dissertation Committee for Von Hansen certifies that this is the approved version of the following dissertation:

An Introduction to Interactive Music for Percussion and Computers

________________________________
Chairperson Dr. Bryan Haaheim

Date approved: 05/01/2014


Abstract  

 

Composers began combining acoustic performers with electronically produced sounds in the early twentieth century. As computer processing power increased, the potential for significant musical communication developed. Despite the body of research concerning electronic music, performing a composition with a computer partner remains intimidating for performers. The purpose of this paper is to provide an introductory method for interacting with a computer.

This document will first follow the parallel evolution of percussion and electronics in order to reveal how each medium was influenced by the other. The following section will define interaction and explain how this is applied to musical communication between humans and computers. The next section introduces specific techniques used to cultivate human-computer interaction. The roles of performer, instrument, composer and conductor will then be defined as they apply to the human performer and the computer. If performers are aware of these roles, they will develop richer communication that can enhance the performer’s and audience members’ recognition of human-computer interaction.

In the final section, works for percussion and computer will be analyzed to reveal varying levels of interaction and the shifting roles of the performer. Three compositions will illustrate this point: 120bpm from neither Anvil nor Pulley by Dan Trueman, It’s Like the Nothing Never Was by Von Hansen, and Music for Snare Drum and Computer by Cort Lippe. These three pieces develop a continuum of increasing interaction, moving from interaction within a fully defined score, to improvisation with digital synthesis, to the manipulation of computerized compositional algorithms using performer input. The unique ways each composer creates interaction will expose the vast possibilities for performing with interactive music systems.


Acknowledgments

I wish to acknowledge the guidance of my primary advisor, Dr. Bryan Kip Haaheim. His expertise in electronic music has shaped my love and understanding of the genre. I would also like to thank Professor Ji Hye Jung for her insight into the performance aspects of this document, and for three years of exceptional instruction. These advisors, along with Dr. Forrest Pierce, Dr. Scott Watson, and Dr. Jacob Dakon, were invaluable to completing my research.

Without the support provided by these professors and the rest of the faculty and staff at The University of Kansas, completing this research would have been a much more difficult task. The support of the school is rivaled only by that of my family, especially my wife Ashley who has supported my endeavors, monetarily and emotionally, throughout my collegiate education.


Contents

1. Introduction
2. The Parallel Evolution of Percussion and Electronics
2.1. Early Works for Percussion and Electronics
2.2. The Introduction of Max/MSP
3. Creating Interactive Music Systems
3.1. What is Interaction?
3.2. Interactive Music Systems
3.3. Human-Computer Interaction
4. Methods for Fostering HCI in Interactive Music Systems
4.1. Sound Synthesis
4.1.1. Signal Processing
4.2. Generative Music
4.2.1. Algorithmic Composition
4.2.2. Composer Control in Generative Music
4.2.3. The Effect of Algorithmic Composition on the Performer
4.2.4. Making Generative Music Interactive
5. Analysis of HCI in Selected Works
5.1. 120bpm from neither Anvil nor Pulley by Dan Trueman
5.1.1. Can a Metronome be Interactive?
5.1.2. Roles of Performers and Computer
5.1.3. Interaction in 120bpm
5.1.4. Is this Work Interactive?
5.2. It’s Like the Nothing Never Was by Von Hansen
5.2.1. Roles of Performer and Computer
5.2.2. Developing Interaction through Randomization
5.2.3. Composing through Controlled Randomization
5.2.4. Improvisation Influenced by Processing
5.2.5. Reactive versus Interactive Music Systems
5.3. Music for Snare Drum and Computer by Cort Lippe
5.3.1. Roles of Performer and Computer
5.3.2. Use of Notation to Establish the Composer’s Voice
5.3.3. Interaction in Music for Snare Drum and Computer
5.3.4. High Level Interactivity
6. Conclusion
7. Bibliography


1. Introduction

Composers began combining acoustic performers with electronically produced sounds in the early twentieth century, developing the medium of live electronic music. Early music for this medium involved analogue devices such as phonographs, microphones and tape recorders, which performed simple tasks of playback and amplification. Together, these devices introduced fixed media into the concert hall. The ability to bring prerecorded sounds into live performance created a boom in pieces for “performer and tape.”1 In the 1980s the introduction of the computer changed the direction of live electronics by involving an increasingly advanced performance partner. The computer, unlike fixed media, accomplished more meaningful tasks than playback by allowing composers to create communication between the performer and computer. As computer processing power increased, the electronic partner was able to perform more complicated tasks, increasingly resembling human thought. This artificial intelligence led to an interest in developing a rich, interactive dialogue between human and computer, expanding the potential for significant musical communication.

Research concerning human-computer interaction (HCI) and its application to music has been increasingly explored over the last thirty years. Texts such as Stuart Card and Thomas Moran’s The Psychology of Human-Computer Interaction (1983) analyzed how humans interact with computers, while Robert Rowe’s Interactive Music Systems (1993) defined how human-computer interaction can apply to human-computer music systems. Simon Holland’s Music and Human-Computer Interaction (2013) addressed further musical implications of human-computer interaction by describing the multitude of ways that HCI is being used for research, instrument creation, and various other musical fields.

1 Performer and tape was the terminology of the time because tape was the standard playback medium. Current terminology replaces tape with fixed media, as CDs and digital audio files have replaced tape devices.

Despite this body of research, performing a composition with a computer partner remains intimidating for performers without experience in this medium. There are two main reasons why performers may be apprehensive about performing these types of works: technological concerns and a lack of understanding of interaction with a computer rather than with a human performer. Technological concerns are most easily addressed by finding someone with experience setting up the microphones, interfaces, speakers and software needed for performance. However, the ability to create an expressive, meaningful, musical performance with a machine is a more intellectually difficult task.

Given the depth of previous research on human-computer interaction and the apprehensiveness that still remains in performance contexts, a resource is needed that can bridge the gap between research and practice. The purpose of this paper is to fill this need by providing performers with an introductory method for musically interacting with a computer partner. The development of artificial intelligence in the late twentieth century made it possible for composers to create communication between performers and computers, and interactive music systems began to be created. Whereas the rules for performing with fixed media were easily discernible, interacting with an intelligent computer partner requires an understanding of how the computer is listening and responding to the performer. In order to create communication that is complex yet easily discernible to audience members, performers must be aware of the unique roles they play and how their input affects computation. This document defines the roles that composers apply to human and computer performers and explains some of the techniques composers use to expose these interactive roles. Developing an understanding of these roles and techniques will allow performers to more effectively interact with a computer partner.

This document will first follow the parallel evolution of percussion and electronics in order to reveal how each medium was influenced by the other. The symbiotic development of these mediums explains why composers often choose to include percussion in works involving electronics, and, therefore, why this topic is of particular interest to contemporary percussionists. The following section will form a definition of an “interactive music system” by defining interaction and explaining how this is applied to musical communication between humans and computers. The next section introduces some specific techniques used to cultivate human-computer interaction, especially those used within the selected works.

The roles of performer, instrument, composer and conductor will then be defined as they apply to the human performer and the computer. If performers are aware of these roles they will have a greater understanding of how their sound is being altered or how it is influencing the computer system. This understanding leads to richer communication that can enhance the performer’s and audience member’s recognition and appreciation of human-computer interaction.

In the final section, works for percussion and computer will be analyzed to reveal varying levels of interaction and the shifting roles of the performer. Three compositions will illustrate this point:

1. 120bpm from neither Anvil nor Pulley by Dan Trueman.
2. It’s Like the Nothing Never Was by Von Hansen.
3. Music for Snare Drum and Computer by Cort Lippe.

These three pieces develop a continuum of increasing interaction, moving from interaction within a fully defined score, to improvisation with digital synthesis, to the manipulation of computerized compositional algorithms using performer input. The unique ways each composer creates interaction will expose the vast possibilities for performing with interactive music systems.


2. The Parallel Evolution of Percussion and Electronics

Percussion and electronic music have experienced a parallel and often symbiotic evolution from the early twentieth century into the twenty-first. The interest in developing the two genres was sparked by Luigi Russolo’s 1913 manifesto The Art of Noises, in which he proclaimed, “Music, as it becomes continually more complicated, strives to amalgamate the most dissonant, strange and harsh sounds.”2 This manifesto was followed by a series of works for percussive noise machines. Russolo’s work expanded the definition of music by claiming that any organized sound, whether created by a violin or by a noise generator, can be music. These ideas placed a focus on timbre and texture and inspired a generation of composers to emphasize these musical aspects as the primary forces in their compositions. Composers on the forefront of this movement such as John Cage (1912-1992), Edgard Varèse (1883-1965) and Karlheinz Stockhausen (1928-2007) are known as pioneers in both percussion and electronic composition. These composers created the first works for solo and chamber percussion, groundbreaking pieces for fixed media, and the first integrations of acoustic and electronic sounds. These works reveal the importance of percussion and electronics to this compositional movement, and are the epitome of the symbiotic evolution of percussion and electronic sounds.

2.1. Early Works for Percussion and Electronics

John Cage is possibly the most influential composer in the development of percussion music. His pioneering use of electronics in his work is also among the earliest, predating nearly all others by a decade. In his 1939 composition Imaginary Landscape no. 1, for muted piano, cymbal and two variable-speed turntables, Cage presents the first substantial use of live, electronically created sound by precisely notating the duration, speed and volume of turntables playing test tones.3 In 1942 John Cage created three pieces that combined percussion and electronics, each of which pushed the boundaries of electronic sound production and expanded the definition of a percussion instrument. Cage’s compositions Credo in US, Imaginary Landscape no. 2 and Imaginary Landscape no. 3 combine piano and percussion with an electric buzzer, a phonograph, a radio, audio frequency oscillators, and a microphone-amplified marimbula.4 The phonograph and radio in Credo in US represent an early use of fixed media, and in the Imaginary Landscapes Cage introduced amplification into chamber music, using the phonograph as a makeshift contact microphone and amplifying the marimbula. Cage’s experiments with texture are accomplished in these works not only by the innovative use of electronic sounds, but also by the inclusion of nontraditional percussion instruments such as tin cans, water gong, metal wastebasket and conch shell. In these pieces the unique timbres of percussion instruments help blend the electronic and acoustic sound worlds, setting a precedent for the consistent inclusion of percussion in future electronic works.

3 Simon Emmerson, Living Electronic Music (Burlington, VT: Ashgate, 2007).

4 A marimbula is a large version of an mbira, or thumb piano, an instrument indigenous to Africa.

Edgard Varèse is another composer who is highly regarded for his work in percussion music as well as electronic sound. His 1930 composition Ionisation is widely considered the first substantial piece for percussion ensemble. Influenced by this revolutionary composition, in 1954 Varèse premiered Déserts for winds, five percussionists and two tape recorders, a piece commonly regarded as the first major work for acoustic instruments and tape.5 In this composition the percussionists and the tape play similar roles, creating atmospheric clouds of texture underneath the wind melodies. This piece sets the stage for further explorations into works for performer and tape, and it is the forerunner of Varèse’s highly influential electroacoustic composition Poème Électronique (1958).

Karlheinz Stockhausen applied the performer and tape medium to chamber music with his 1958 composition Kontakte no. 12.5 for electronic sounds, piano and percussion. In this work the performers are given a graphic representation of the electronic part and mostly standard notation for accompanying the tape. Along with the revolutionary techniques used in the electronic part, Stockhausen created a new percussion instrument for this work, a drum with a plywood drumhead, in order to reduce the resonance of the drum.6

5 Olivia Mattis, “Varèse’s Multimedia Conception of ‘Déserts’,” The Musical Quarterly, Vol. 76, No. 4 (Winter 1992): 557.

6 Karlheinz Stockhausen, Kontakte no. 12.5 for Electronic Sounds, Piano and Percussion (Kürten, Germany: Stockhausen-Verlag, 1960).

Stockhausen’s 1964 work Mikrophonie I explores interaction with live electronics and percussion in a revolutionary way. This work is written for two players striking a tam-tam with various implements and four others adjusting microphone positions, filters and potentiometers (more commonly known as volume knobs). Stockhausen provides descriptive words such as groaning, trumpeting, whirring, and hooting, along with a graphic score, to define the desired sounds.7 The composer states, “The microphone is used actively as a musical instrument, in contrast to its former passive function of reproducing sounds as faithfully as possible.”8 Unique textures are created through amplification of normally inaudible sounds and gestural movement of the microphones. Two other players adjust filters and distribute the sound through four speakers. The tam-tam part is unique for the extensive timbral exploitation using various mallets and household objects. All six of the musicians in this work interact with one another to create these unique sounds, and they also directly interact with live-produced electronic sounds.

In the 1970s the repertoire for instrumentalist and tape began to expand, and percussion continued to be a large presence in the genre. Compositions such as Synchronisms by Davidovsky, Etude for Tape Recorder and Percussion by William Cahn, and For Marimba and Tape by Wesley Smith continued the process of performing with prerecorded sounds. The medium of percussion and electronics continued to be primarily focused on works involving fixed media until the 1990s. As computers became more advanced and portable, composers began to search out higher levels of interaction between performers and computers, leading to the development of advanced music processing software.

8 Karlheinz Stockhausen, “Mikrophonie I for Tamtam, 2 Microphones, 2 Filter and Potentiometers,” Texts on Music 3 (1971): 58.

2.2. The Introduction of Max/MSP

In the late 1980s the highly influential computer program Max (Cycling 74, 1990) was developed at the Parisian research institute IRCAM (Institut de Recherche et Coordination Acoustique/Musique). Max is an object-based music programming language developed by Miller Puckette in 1988, and MSP is the digital signal processing portion of the program, added in 1997. The Max/MSP program is possibly the most influential computer program for electronic composition because it opened the world of computer composition to many new composers by creating a means of focusing on the sounds produced by a computer rather than the computational process of producing them.9

One of the first works for percussion and computer using Max is Kaija Saariaho’s Six Japanese Gardens (1993). Saariaho was able to create one of the first compositions for percussion and electronics that moved away from fully fixed media. In this piece Saariaho uses a foot pedal to trigger prerecorded sound files instead of having a performer follow a fully fixed recording. The performer is no longer shackled to the fixed track; rather they are able to be expressive with time. In this work the composer is able to use Max/MSP not only to control the progression of the media but also to apply reverb and spatialization10 to the percussionist’s sound. Saariaho weaves together sections where the performer must strictly follow the recording with sections that are left more open in order to allow for performer interpretation.

As composers explore the possibilities of this program, greater levels of interaction continue to develop. In Max/MSP computers are able to use real-time analysis to “understand” performer input, and this input can be used to alter compositional algorithms, allowing the computer to “respond” to the performer. Composers continue to explore and expand the capabilities of Max/MSP, creating interactive music systems that increase the possibilities for human-computer interaction.


9 “What is Max,” Cycling 74 Website, accessed March 28, 2014, http://cycling74.com/whatismax/.

10 Spatialization, also known as panning, involves moving the playback of sound between speakers.


3. Creating Interactive Music Systems

3.1. What is Interaction?

Interaction can be defined as reciprocal action between two or more entities. Live performance involving more than one performer is predicated upon interaction as musicians interact with one another to synchronize and present a unique, mutual, musical interpretation. According to composer of interactive computer music Cort Lippe, “This relationship involves a subtle kind of listening while playing in a constantly changing dialogue among players, often centering on expressive timing and/or expressive use of dynamics and articulation.”11 Traditionally notated scores define rhythms, pitches, and dynamics, but the subtle, subjective variations of tempo, volume and timbre within these constraints define musical interaction. Improvised music further develops musical interaction through mirroring and transformation of melodic, harmonic and rhythmic figures. All of these musical interactions boil down to communication.

11 Cort Lippe, “Real-time interaction among composers, performers, and computer systems,” Information Processing Society of Japan SIG Notes (2002): 2.

When considering music that includes electronic components, various levels of communication can be achieved. Works for fixed media contain no meaningful interaction. One can play along with the media, but no matter what the performer does the fixed part does not change. The Saariaho piece discussed above begins to introduce interaction by allowing the performer to control the progression of fixed media. Computer processing of live sounds allows performers to be influenced by the alteration of their own sound. Interaction is increased further as computers are allowed to make decisions, through mathematical processes, causing performers to react to unpredictable events. The interaction between performers and computers becomes most apparent as the sounds created by each entity continually alter the actions of the other. In order to bring interaction into computer systems, composers and software developers have to create a means for meaningful communication between humans and computers.

3.2. Interactive Music Systems

The introduction of advanced computer systems allowed for the development of interactive music systems. In his groundbreaking book Interactive Music Systems, Robert Rowe says, “Interactive music systems are those whose behavior changes in response to musical input.”12 The level of interactivity within a system is determined by the complexity of the communication between the performer and computer. The computer may take the input and perform digital signal processing, outputting a transformed version of the original material, or events may be triggered by the sound of a specific instrument. These systems become more interactive as a higher level of artificial intelligence is introduced into the system.

12 Robert Rowe, Interactive Music Systems: Machine Listening and Composing (Cambridge, Mass.: MIT Press, 1993), 1.

Artificial intelligence is the automation of processes associated with human thinking, such as problem solving, decision making and learning. When applied to an interactive music system, this involves allowing the computer to make choices either through randomization, algorithmic processes or analysis of musical input. Randomization produces a computer response that is different every time, allowing performers to be influenced by unexpected events. Algorithms can be used to create more informed decision-making, creating more predictable interactive music systems. The processing of randomization and algorithms can be made richer when performer input informs the computer’s decision-making, altering variables within the computational process. When using advanced computer systems a composer is not limited to using pitch and rhythm as triggers. In these systems composers can use more interpretive aspects of a performance, such as timbre and dynamics.13 The computer’s analysis of performer interpretation leads to more human-like responses.
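As a schematic example of this kind of decision-making (a hypothetical sketch written for this discussion, not drawn from any of the systems cited here), the Python fragment below uses an interpretive aspect of a phrase, its dynamic level rather than its pitches, as the trigger that chooses between two response behaviors.

    import random

    def phrase_dynamic(velocities):
        """Average dynamic level of a performed phrase (MIDI velocities, 0-127)."""
        return sum(velocities) / len(velocities)

    def choose_response(velocities, threshold=80):
        """Use the measured dynamic level as the trigger: loud playing provokes a
        sparse, contrasting reply, while quiet playing provokes a dense one."""
        if phrase_dynamic(velocities) >= threshold:
            return ("sparse", random.randint(1, 3))    # few response events
        return ("dense", random.randint(8, 16))        # many response events

    print(choose_response([95, 102, 88, 110]))   # loud phrase -> ('sparse', ...)
    print(choose_response([40, 52, 47, 44]))     # quiet phrase -> ('dense', ...)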

Interactive music systems are used to create meaningful communication with humans; therefore it is important to note how performers respond to these systems. A performer listens to the computer-synthesized sounds and uses musical intuition to respond to what they have heard. In a traditionally notated score this may involve subtle changes in tempo, dynamics or timbre. If improvising, performers can alter their level of activity in response to the synthesized sound. Interactive music systems achieve profound interaction when a feedback loop is created, with performer input affecting computer output that is in turn reinterpreted by the performer. The ability to create complex communication in interactive music systems has led to the development of research in human-computer interaction in music.

3.3. Human-Computer Interaction

Human-Computer Interaction (HCI) is a term used in multiple scientific fields to analyze and design computer systems that are more usable and adapt to user needs. In the late 1970s and early 1980s monitors and workstations were developed, allowing non-computer scientists to interact with computers and generating an interest in how average users interact with computers. Stuart Card and Thomas Moran then popularized HCI as a research discipline in the book The Psychology of Human-Computer Interaction, which analyzed how humans interact with computers.14 The field of study progressed from a focus on ergonomics to current HCI research involving, but not limited to, modeling human behavior through machine learning, user-centered development of information technology, and the development of new interactive technologies.15

Music and Human-Computer Interaction, a collection of writings concerning the application of HCI in music, states, “Music Interaction encompasses the design, refinement, evaluation, analysis and use of interactive systems that involve computer technology for any kind of musical activity…”16 This definition shows the wide range of HCI in music, from digital instrument design to music research methodologies to music analysis. The full application of HCI in music is too broad for the scope of this paper, so focus will be placed on the use of HCI within interactive music systems. The following section will outline some common techniques used to foster HCI in an interactive music system.


14 Stuart K. Card and Thomas P. Moran, The Psychology of Human-Computer Interaction (Hillsdale, N.J.: L. Erlbaum Associates, 1983).

15 Jenny Preece et al., Interaction Design: Beyond Human-Computer Interaction (New York: J. Wiley & Sons, 2002).


4. Methods for Developing HCI in Interactive Music Systems

4.1. Sound Synthesis

Interactive music systems often use a variable set of processes to synthesize new sounds. Stefania Serafin defines sound synthesis as “the production and manipulation of sounds using mathematical algorithms.”17 From this definition two main subcategories of sound synthesis can be determined: those that manipulate live input—signal processing—and those that create or play back sounds—generative music. Introducing a level of controlled randomization into these techniques creates space for the computer to make compositional choices, and using a live signal to affect these computations allows the computer to listen and react to its human counterpart. These two methods of synthesis create the reciprocal action required for HCI.

17 Stefania Serafin, “Computer Generation and Manipulation of Sounds,” in The Cambridge Companion to Electronic Music, ed. Nick Collins and Julio d’Escrivan (New York: Cambridge University Press, 2007), 203.

4.1.1. Signal Processing

The manipulation of live sound in an interactive computer system is achieved by digital signal processing (DSP), where the computer manipulates a digital copy of the live sound. Simple examples of this would be digital reverb, delay, distortion or pitch shifting. More complex techniques include spectral analysis, additive/subtractive synthesis, cross synthesis, and granular synthesis. Spectral analysis involves the computer examining the pitch, timbre and duration of musical input. The computer can then use this analysis to synthesize new sounds from the source material. Additive or subtractive synthesis either overlays new material at selected frequencies or reduces selected frequencies, creating alterations to the harmonic and timbral characteristics of a sound. Cross synthesis involves the merging of two signals to create a new sound containing variables from both. This is most often accomplished by combining the performer’s live signal with prerecorded material. Granular synthesis cuts a sound into short sections, usually less than 100 ms long, called grains.18 The grains can be reordered to create a new sound, or each grain can be analyzed and altered to expose the subtle details of a sound. The synthesis of new sounds through signal processing allows performers to interact with a modified iteration of their own performance, but all of the original source material comes from the performer. In order for performers to be directly influenced by the computer, some material must originate from the computer. To accomplish this goal composers apply generative music systems to achieve computer composition.
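The works discussed later realize these techniques in graphical patching environments; as a rough, language-neutral illustration of granular synthesis alone, the Python sketch below (hypothetical, with invented parameter values) cuts a recorded buffer into sub-100 ms grains, reorders them, and overlap-adds them under a smoothing envelope.

    import numpy as np

    def granulate(source, sr=44100, grain_ms=80, overlap=0.5, seed=None):
        """Cut a recorded buffer into short grains, reorder them at random, and
        overlap-add them under a Hann envelope to form a new texture."""
        rng = np.random.default_rng(seed)
        grain_len = int(sr * grain_ms / 1000)            # grain size in samples (< 100 ms)
        hop = int(grain_len * (1 - overlap))             # spacing between grain onsets
        starts = np.arange(0, len(source) - grain_len, hop)
        rng.shuffle(starts)                              # reorder the grains
        env = np.hanning(grain_len)                      # smooth each grain's edges
        out = np.zeros(len(starts) * hop + grain_len)
        for i, s in enumerate(starts):
            out[i * hop:i * hop + grain_len] += source[s:s + grain_len] * env
        return out / max(1e-9, float(np.abs(out).max())) # normalize to avoid clipping

    # A one-second 220 Hz test tone stands in for a recording of the performer.
    tone = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
    cloud = granulate(tone, grain_ms=80, seed=0)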

4.2. Generative Music

To create fully interactive music the computer must synthesize some sort of original music. The production of new material rather than processing of acoustic signals is referred to as generative music.19 The most basic form of generative music is the playback of sound files, but a more complex version uses algorithms to create new sounds or alter sound files by defining specific parameters such as tempo, rhythm, pitch, timbre and dynamic level. With twenty-first century computing power, highly complex generative music systems are used to create artificial intelligence in a musical setting.


18 Peter Elsea, The Art and Technique of Electroacoustic Music (Middleton, Wisconsin: A-R Editions, 2013), 324.

19 Jon McCormack et al., “Generative Algorithms for Making Music,” in The Oxford Handbook of Computer Music, ed. Roger Dean (New York: Oxford University Press, 2009), 358.


4.2.1. Algorithmic Composition

Generative music is often created through the application of algorithmic composition to a computer system. An algorithm can be defined as a set of instructions for solving a problem. The use of algorithms dates as far back as Euclid in 300 BC, and algorithmic composition has existed in a basic form for centuries. Counterpoint, for example, is an algorithm that defines the composition of tonal music through voice-leading rules. In the twenty-first century algorithmic composition is a term usually reserved for the application of algorithms to generate music using computers.

4.2.2. Composer Control in Generative Music

For generative music to sound cohesive with the performer, guidelines must be placed upon the generation of new material. In regards to algorithmic composition, the rules defined by the algorithm must have boundaries to avoid electronic parts that have no connection to the performer. Placing no restrictions upon a generative music system would be similar to writing, “play music” on a performer’s score. An infinite number of outcomes could happen from this lack of definition, eliminating composer influence.

Composers of generative music have to set parameters for the computer to follow, controlling the degree of randomness within the work. Variables such as pitch, duration, dynamic level, and timbre can be defined to create unity within a computer system. For example, a table of available pitches defining a specific scale or tonality may be created from which the computer can randomly choose. The controlled variation of note length generates recurring rhythms, while limitations on dynamic levels and timbre can define style. The parameters placed upon algorithmic composition are where composers assert their influence.
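A minimal sketch of this kind of composer control, written here in Python rather than in a patching environment, might look like the following; the specific tables are invented for illustration and do not come from any of the works discussed in this paper.

    import random

    # Invented parameter tables of the kind a composer might define:
    PITCH_TABLE = [62, 65, 67, 69, 72]   # MIDI notes limited to a D minor pentatonic collection
    DURATIONS = [0.25, 0.5, 0.5, 1.0]    # beats; repeated values bias the output toward recurring rhythms
    DYNAMICS = (50, 90)                  # MIDI velocity range, constraining the dynamic style

    def generate_phrase(length=8, seed=None):
        """Return (pitch, duration, velocity) events chosen at random within
        the composer-defined boundaries above."""
        rng = random.Random(seed)
        return [(rng.choice(PITCH_TABLE),
                 rng.choice(DURATIONS),
                 rng.randint(*DYNAMICS)) for _ in range(length)]

    print(generate_phrase(length=4, seed=1))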


4.2.3. The Effect of Algorithmic Composition on the Performer

Algorithmic composition has particularly profound applications in HCI because it is not bound by human experience or preference. Karlheinz Essl explains:

“[algorithmic composition] enables one to gain new dimensions that expand investigation beyond a limited personal horizon. From this basis algorithms can also be regarded as a powerful means to extend our experience – they might even develop into something that may be conceived as an ‘inspiration machine.’”20

Though, as stated above, the composer can influence generative music, the synthesized music is chosen through algorithmic composition involving randomization. The unpredictability of algorithmic composition, therefore, can make HCI richer, by challenging performers to react to an unexpected musical gesture.

4.2.4. Making Generative Music Interactive

If the music being generated is not influenced by the behaviors of the performer, then the computer is not interacting with the performer. To create fully integrated interactive music, signal processing and generative music must be combined, and performer input should trigger changes in algorithms.21 HCI is heightened in these cases because an “interaction loop” continuously transforms the computer output as a performer reacts to new material.
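As a schematic illustration of such an interaction loop (a hypothetical sketch; none of the names or mappings come from the works analyzed in the next section), the fragment below extracts a single feature from a block of performer audio and lets that feature reshape the variables of a simple generative algorithm.

    import random

    def analyze_input(live_block):
        """Crude listening stage: mean amplitude of one block of performer audio."""
        return sum(abs(s) for s in live_block) / len(live_block)

    def generate_response(loudness, pitch_table=(60, 62, 65, 67, 70)):
        """Generative stage: the measured loudness reshapes the algorithm's variables,
        here the density and register of a randomly generated phrase."""
        density = 2 + int(loudness * 10)      # louder playing -> busier response
        register = int(loudness * 24)         # louder playing -> higher transposition
        return [random.choice(pitch_table) + register for _ in range(density)]

    # One pass through the loop: listen, decide, respond. The performer would then
    # react to this output, closing the interaction loop.
    performer_block = [random.uniform(-0.8, 0.8) for _ in range(512)]  # stand-in for live input
    print(generate_response(analyze_input(performer_block)))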


20 Karlheinz Essl, “Algorithmic Composition,” in The Cambridge Companion to Electronic Music, ed. Nick Collins and Julio d’Escrivan (New York: Cambridge University Press, 2007), 108.

21 Simon Holland et al., Music and Human-Computer Interaction, 10.


5. Analysis of HCI in Selected Works

Interactive music systems require performers and computers to adopt unique, often shifting roles. Cort Lippe defines these roles: “The computer can be given the role of instrument, performer, conductor, and/or composer… A performer is already a performer, and already plays an instrument; therefore, a composer can assign the role of conductor and composer to a performer.”22 The computer as “instrument” can involve software instruments or signal processing. The computer as a “performer” encompasses sound production through playback of fixed media or computer-generated sounds. When algorithmic composition is applied the computer may be a “composer,” and when the computer triggers or cues specific events it may “conduct.” The human performer can “conduct” by triggering events using some sort of physical interface such as a MIDI keyboard, computer keyboard or a pedal. The performer’s sound can also be used to trigger events, as in Music for Snare Drum and Computer by Cort Lippe. The performer as “composer” is most often achieved through improvisation, by allowing the performer to react to the computer in real time. The application of these roles will be used to expose varying levels of interaction in 120bpm by Dan Trueman, It’s Like the Nothing Never Was by Von Hansen, and Music for Snare Drum and Computer by Cort Lippe.

5.1. 120bpm from neither Anvil nor Pulley by Dan Trueman

In 2010 So Percussion23 commissioned neither Anvil nor Pulley from composer and director of the Princeton Laptop Orchestra (PLOrk), Dan Trueman. The forty-five-minute work contains five movements: three prerecorded fiddle tunes, upon which the ensemble improvises an accompaniment, and two longer movements for laptop percussion quartet. The second movement, 120bpm, will be discussed within this document due to its unique application of interaction. The instrumentation for 120bpm, a plank of wood, two tuned steel pipes, a tom-tom, two instruments of the performers’ choice, a MIDI foot pedal, and a tether video game controller, is mirrored in all four parts. Contact microphones are affixed to the wood plank and the tuned pipes, allowing them to be processed by the computer. The foot pedal is depressed to progress through computer presets, and the video game controller has been modified to control the playback of sound files. Trueman creates interaction in this movement using a resettable metronome and modified game controllers, both of which allow physical manipulation of the computer playback.

22 Lippe, “Real-time interaction among composers, performers, and computer systems,” 2.

23 So Percussion is an American percussion quartet based in New York City, composed of Josh Quillen, Adam Sliwinski, Jason Treuting, and Eric Beach.

5.1.1. Can a Metronome be Interactive?

The subtitle of 120bpm, “What is your Metronome Thinking?”, reveals what Dan Trueman calls the “impetus of this movement.”24 The metronome referred to in the subtitle is a digital metronome that is restarted every time the percussionists strike the wood plank. The contact microphone on the plank does not amplify the instrument; rather, it uses the amplitude information of the plank as a control signal to restart the metronome. The first time the plank is struck it activates a metronome clicking every 500 ms (120 bpm), but once the plank is struck again the countdown to the next click is reset to 500 ms.25 The effect created through this process is a metronome that can be manipulated by the performers. Since each percussionist controls a separate metronome, there are four independently controlled metronomes in play throughout this composition.
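The behavior described above can be modeled with a simple countdown. The sketch below is a hypothetical simulation written for this discussion, not code from Trueman’s patch; it assumes the first plank strike starts the clock and that a click sounds 500 ms after the most recent strike or click.

    CLICK_INTERVAL = 0.5   # seconds between clicks: 500 ms, or 120 bpm

    def metronome_clicks(plank_hits, end_time):
        """Given the times (in seconds) at which the plank is struck, return the
        times at which the metronome clicks. The first hit starts the metronome,
        and every later hit resets the countdown to the next click."""
        if not plank_hits:
            return []
        clicks = []
        next_click = plank_hits[0] + CLICK_INTERVAL
        hits = iter(plank_hits[1:])
        hit = next(hits, None)
        while next_click <= end_time:
            # Any plank hit arriving before the pending click pushes that click back.
            while hit is not None and hit < next_click:
                next_click = hit + CLICK_INTERVAL
                hit = next(hits, None)
            if next_click <= end_time:
                clicks.append(next_click)
                next_click += CLICK_INTERVAL
        return clicks

    # A hit at 1.25 s lands just after the click at 1.0 s and delays the next click to 1.75 s.
    print(metronome_clicks([0.0, 1.25], end_time=3.0))   # [0.5, 1.0, 1.75, 2.25, 2.75]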


24 Dan Trueman, interview by author, Lawrence, KS, Feb 21, 2014.


At first glance this work seems to contain only simple, physical interaction, using the plank to restart the metronome, but in the program notes to neither Anvil nor Pulley Trueman begins to outline his view on how a metronome can become “musically” interactive. He states,

I’ve long been interested in how differently machines and people measure time. Oddly, we assume that the machines are always “right,” whatever that might mean. But, for many, the “unhuman” quality of time that machines lend to music is heard as flawed and musicians know only too well how brutally unfair, and unmusical, metronomes can seem. In the second movement of neither Anvil nor Pulley, the machines and the humans duke it out.26

Unpacking this statement reveals a need for making the metronome more flexible, but it also involves how metronomes influence musicians’ interpretation of time. Trueman describes the metronome in this work as moving from something we have to “obey” to something that “provokes” us to feel time in a physical way. Furthermore, he claims that the interaction with the metronome is able to “use the possibility of computation to get us to a place ‘musically’ where we just couldn’t get to otherwise.”27

5.1.2. Roles of the Performers and Computer

The resettable metronome, combined with a MIDI pedal, allows the performer’s main interactive role within 120bpm to be defined as conductor. By striking the plank the performer directly affects the progression of the metronome. The MIDI pedal is then used to move through a series of presets that determine the tempo of the metronome, the pitch shifting of the pipes, and the sound files for the tether controllers. Both of these actions control how the computer progresses through the composition. Though the performer’s main role in interaction is as conductor there are four short sections of improvisation in which the role is briefly shifted to composer.


26 Dan Trueman, neither Anvil nor Pulley (New York: Good Child Music, 2010).

27 Trueman, interview.


The computer’s major function is metronome playback; therefore its primary role is as a performer. However, the roles of instrument and conductor are also employed by applying signal processing to the steel pipes. The two tuned steel pipes are connected to the computer through contact microphones, allowing for a unique digital processing that engages the instrument and conductor roles. The pipes are processed using what the composer calls “nostalgic preparation,” in which the processing is based on the tempo of the metronome. The computer “hears” that a pipe is struck, determines the time until the next metronome click, records the pipe for half of that time, then plays this recording backward. This process allows the attack of the reversed sample to sound in sync with the metronome (Figure 1). Since the reversed sample swells into the next click, this “nostalgic” effect “conducts” the next occurrence of the metronome. The pitch of the pipe is also digitally shifted throughout the piece, creating counterpoint between the eight pipes. The nostalgic preparation and pitch shifting combine to create a unique digital instrument that is triggered by striking the pipe.

Figure 1- Nostalgic Pipe Effect28
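The timing logic described above can be sketched in a few lines. The following Python fragment is a reconstruction from the prose, not Trueman’s actual patch, and the sample rate and function names are assumptions made for illustration.

    CLICK_INTERVAL = 0.5   # seconds, 120 bpm
    SR = 44100             # assumed sample rate

    def nostalgic_playback(strike_time, last_click_time, pipe_recording):
        """Return (playback_start_time, reversed_sample). The computer measures the
        time remaining until the next metronome click, records the struck pipe for
        half of that time, reverses the recording, and schedules it so the reversed
        attack arrives exactly on the click."""
        next_click = last_click_time + CLICK_INTERVAL
        remaining = next_click - strike_time
        record_len = remaining / 2.0                     # record for half the remaining time
        sample = pipe_recording[: int(record_len * SR)]
        reversed_sample = sample[::-1]                   # swells toward its original attack
        playback_start = next_click - record_len         # the end of playback lands on the click
        return playback_start, reversed_sample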

28 Dan Trueman, Nostalgic Synchronic Etudes.

The role of instrument is also engaged as sound files are manipulated using the game controllers. The game controllers will be discussed in further detail in the following section, but their basic application is as a gesture recognition system. The performers are able to use these game controllers to physically manipulate the playback of sound files, exposing complex timbres by focusing on single grains of the sampled sound.

5.1.3. Interaction in 120bpm

Trueman applies interaction in this composition through the use of a performer-controlled metronome and modified game controllers. The interaction in this work is interesting because it involves slight manipulations of fixed material. In most works involving fixed media the media part cannot interact with the performer. However, Trueman uses performer input to affect the playback of the computer, creating a way for the computer to respond to the performer. The metronome playback affects the performer’s perception of time, provoking the performer to respond to the computer.

Throughout this composition Trueman uses several different techniques to alter the interaction between the performers and the metronome. He moves the metronome to different parts of the beat to create unique rhythms and meters, allows improvisation with unsynchronized metronome clicks, and transforms the metronome click into unmetered gestures.

Trueman creates rhythmic and metric interest in the metronome by placing plank strikes on various parts of the beat. The use of this technique is apparent at letter B (Figure 2). The rhythmic variation of the metronome occurs first in m. 20, where plank strikes occur on the offbeat. Player one strikes the plank on the second eighth note of beat two, and this gesture is repeated, offset by a beat, by each successive player. The metronome is therefore reset to sound on the offbeat until the players strike the plank on beat 2 of m. 22. Another interesting rhythmic gesture is created in mm. 23-24 as players one and two create a 5:4 hemiola pattern. Each gesture is five sixteenth notes long and contains a pipe strike on the first sixteenth note followed by a plank strike on the second. Since the metronome naturally clicks every four sixteenth notes and is reset by the plank, the click sounds at the start of each gesture (in sync with the pipe), reinforcing the 5:4 feel. The final rhythmic gesture occurs on the last beat of m. 24, where the plank strikes are offset by one sixteenth note in each part. The product is a metronome click outlining every sixteenth note of the bar. Trueman also uses this rhythmic offset to create meter changes, such as playing a plank on the sixth eighth note of a measure to create a 7/8 bar or on the second sixteenth of a bar to create a 5/16 meter. The interplay of the computer and the performer in this section allows the metronome to be used as another instrument.

Figure 2- mm. 17-27 of 120bpm

Though the auditory effect of the offbeat metronome strikes is rhythmic interplay between the performers and the four clicks, the performers’ perception of time within these bars creates a deeper level of interaction. Trueman influences how the performers interact with metronomic time through his use of notation. For example, the 7/8 bar, m. 19, and the 5/4 bar, m. 20, contain the same gesture, a plank struck an eighth note after the metronome sounds. However, in the 7/8 bar the performers’ perception is a lengthening of the measure, while the perception of the 5/4 bar is the shifting of the metronome to the offbeat. The notation affects how performers perceive time within these bars, though the metronome responds exactly the same. This is one way that Trueman allows the metronome to “provoke” the performers’ interpretation of time.

Trueman also uses the manipulation of rhythm and meter to develop a type of interaction that is unique to this piece. The composer pointed out that letter I (Figure 3) is particularly intriguing to him in the way that the metronome affects the normal interaction of chamber musicians.29 In this section a series of figures is repeated an indeterminate number of times, allowing the performers to settle into the groove of each repeated figure. When performed without the click, the chamber musicians follow one another and a “group mind” effect allows the musicians to settle into the groove. This groove is never mathematically perfect, but when the metronome is added the group has to account for the unyielding precision of a machine. Trueman states, “[the percussionists] have to settle into the groove in a different way and sort of rebound off the clicks.”30 This is an interesting concept because it moves from a one-to-one interaction between a computer and a performer to a four-to-one interaction between a group of chamber musicians and a computer. The rhythmic and metric manipulation of the metronome continues throughout the composition.

29 Trueman, interview.

30 Trueman, interview.


Figure 3- mm. 103-114 of 120bpm

Interaction with the metronome is further developed through the addition of brief improvised sections. The improvised sections, marked as “Free Gestural A-rhythmic (FGA),” are composed of a notated instrumentation, different in each part but always including the wood plank, and instructions to improvise without synchronizing. The metronome is still present within this section; therefore, as the performers improvise, the four unsynchronized clicks create constantly evolving rhythmic gestures. The improvised sections allow the performers, through manipulating the metronomes, to create a fifth “performer” made up of the unsynchronized clicks.

Trueman advances the metronome interaction by transforming the 120bpm click into unmetered gestures, using two different processes. The first is an accelerating click, which first occurs before letter P. The effect of the accelerating click is similar to a bouncing ball, where the beats of the metronome start out far apart but exponentially become closer together. The performers can restart this accelerating click as they have before. At letter Q (Figure 4) the performers interrupt the accelerating click by striking the wood plank after a desired number of pulses occur. The number of clicks within each bar is notated, as well as two to three instrument strikes occurring on specified clicks. The first beat of every bar is a wood plank in order to restart the accelerating click, and the following notes, on either a pipe or a drum, create an accelerating rhythmic interplay between the two parts. The same concept of altered chamber music practice continues, as the performers must connect with each other while trying to place each of their notes with the accelerating click. The accelerating click creates another unique use of interaction with a mathematically fixed computer part.

Figure 4- mm. 203-205, 226-228 from 120bpm
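The “bouncing ball” behavior can be approximated by letting each interval between clicks shrink by a constant ratio. The sketch below is a hypothetical model for illustration only; the actual curve and starting values in Trueman’s patch are not specified in the excerpts discussed here.

    def accelerating_clicks(start_time=0.0, first_gap=1.0, ratio=0.8, min_gap=0.02):
        """Yield click times whose spacing decays geometrically, like a dropped ball
        bouncing faster and faster."""
        t, gap = start_time, first_gap
        while gap > min_gap:
            t += gap
            yield round(t, 3)
            gap *= ratio          # each interval is a fixed fraction of the previous one

    # Prints the first ten click times, which bunch closer and closer together.
    print(list(accelerating_clicks())[:10])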

The final use of the 120bpm metronome is increasing the speed of the metronome so much that rhythm becomes pitch. Trueman points to an article by Stockhausen in which he describes how a repeated rhythm becomes a pitch as its speed is increased.31 In 120bpm Trueman uses the metronome pulses to create a pitch, layering metronomes upon each other, beginning with 30 bpm, followed by 60, 120, 240, etc., until the rhythm of the metronome becomes a pitch. Transforming the pulses into a pitch creates an absence of metronomic time during a point where the performers are manipulating sound files, consisting of sustained tones, with the game controllers.
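The arithmetic behind this transformation is simply the conversion of beats per minute into cycles per second (Hz = bpm / 60). The continuation of the doubling series below, past the values named in the score, is added here only to show roughly when the pulse reaches the audible pitch range.

    # Converting a click rate in beats per minute to a frequency in hertz: Hz = bpm / 60.
    for bpm in [30, 60, 120, 240, 480, 960, 1920, 3840]:
        print(f"{bpm:>5} bpm = {bpm / 60:7.2f} Hz")
    # 3840 bpm is 64 Hz, a low but clearly audible pitch (roughly two octaves below middle C).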

In the final section of 120bpm Trueman enables the performers to physically interact with the computer, utilizing a video game controller to control the playback of sound files. The controller used is the Gametrak tether controller (MadCatz, 2000), originally designed as a 3-D interface for PlayStation 2 and Xbox. Two retractable nylon tethers are run through a gear system to record the distance of extension (z axis), and the cords are run through joystick knobs to analyze their position in space (x and y axes). Though more advanced motion-tracking controllers are available, the simplicity, accuracy, and low cost of the Gametrak have made it an increasingly prevalent tool in the electronic music world and in Trueman’s composition.

In 120bpm Trueman employs the Gametrak to directly interact with the computer through a graphic interface. The composer explains the use of the controllers,

The tethers control playback position in pre-recorded soundfiles. Each soundfile has two "events," which are clearly identifiable in the graphic display. The upper soundfile is controlled by the right hand, the lower by the left. The red vertical line indicates where the soundfile is currently sustaining. This is a spectral playback instrument, meaning it will sustain continuously at whatever point the tether indicates (the tether does not have to move to make sound).32


31 Karlheinz Stockhausen, “How Time Passes By,” Die Reihe (1959).

32 Dan Trueman, neither Anvil nor Pulley, 2010.


Figure 5- Interface for Gametrak Vocoder Control

For each sound file there are two recorded sounds, and the primary function of the notation is to indicate a movement between the two. For example when a note-head is present in the middle line of the staff the performer should move the cursor to the spot where the first sound ends and the second begins to expose the transition between the two sounds, as represented in the upper portion of Figure 5. If a note-head is near the bottom of the staff the performer should move the cursor to the right waveform, as represented in the lower portion of Figure 5, and if the note-head is near the top of the staff the cursor is moved to the far left.

Throughout this section Trueman uses this notation to guide the performer’s manipulation of the time-stretching feature of a phase vocoder, producing eerie, complex, long tones from freezing and stretching portions of the prerecorded fiddle sounds. A phase vocoder is an application of granular synthesis that can be used for time stretching or pitch shifting. As one could infer, time stretching is altering the length of a grain and pitch shifting is changing the perceived pitch. The unique aspect of a phase vocoder is that these processes can be achieved without affecting one another; therefore time stretching can occur without pitch shifting and vice versa. In 120bpm, as the playback cursor is moved through the sound file, the computer plays the selected grain of sound. If one stops in place this “freezes” the sound, and if one moves the cursor slowly this stretches the sound file. Trueman uses this effect to expose subtle timbre changes in the sound files. By using the video game controller for gesture recognition the performers are able to physically interact with the computer playback.
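The mapping from tether position to playback position can be illustrated with a crude loop-based freeze. The sketch below is a hypothetical approximation (it sustains a single windowed grain rather than performing true phase-vocoder resynthesis), and all names and ranges are invented for illustration rather than taken from Trueman’s patch.

    import numpy as np

    SR = 44100
    GRAIN = 2048                                    # samples sustained at the cursor position

    def scrub_position(tether_extension, soundfile):
        """Map a normalized tether extension (0.0 = retracted, 1.0 = fully drawn out)
        to a sample index in the soundfile, i.e. the red cursor line in Figure 5."""
        return int(tether_extension * (len(soundfile) - GRAIN))

    def frozen_grain(soundfile, idx, seconds=1.0):
        """Sustain the grain at the cursor by looping it: holding the tether still
        'freezes' the sound, and moving it slowly stretches the file."""
        grain = soundfile[idx:idx + GRAIN] * np.hanning(GRAIN)
        repeats = int(seconds * SR / GRAIN) + 1
        return np.tile(grain, repeats)[: int(seconds * SR)]

    fiddle = np.sin(2 * np.pi * 220 * np.arange(SR * 4) / SR)   # stand-in for a fiddle recording
    out = frozen_grain(fiddle, scrub_position(0.5, fiddle))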

5.1.4. Is this Work Interactive?

At first glance the score for 120bpm does not present obvious interactivity between the performers and the computer, but deeper analysis exposes Trueman’s unique application of interaction. The involvement of several shifting roles for both the performers and computer allows for various types of interaction, and the emphasis on developing complex ways of interacting with a metronome reveals musicians’ unique interpretation of time. Developing computerized instruments that are controlled by striking pipes or gestural recognition through a video game controller enables the percussionists and computers to physically interact. Dan Trueman’s 120bpm exposes a unique perspective on what constitutes HCI.

Trueman uses the manipulation of fixed material to create interaction, but focusing on signal processing and live composition can create another level of interaction. Including a greater level of improvisation can develop the “performer as composer” role further, and using algorithmic composition creates a constantly changing computer part. In the following work, It’s Like the Nothing Never Was, Von Hansen manipulates live recordings of the performer to develop an interactive music system.

5.2. It’s Like the Nothing Never Was by Von Hansen

It’s Like the Nothing Never Was is an interactive music system, developed in Max/MSP, designed to encourage interaction between performers and computers. Unlike 120bpm, It’s Like the Nothing Never Was contains no fixed or prerecorded material. The performer’s live input is recorded, and all of the sounds created by the computer are digitally processed versions of these live recordings. There is no specified instrumentation, nor is there a traditionally notated score; rather, a document outlining the computer’s actions is presented. The timing, computer activity, and guidelines for performance are given for seven separate events. By supplying minimal information the composer intends for the performer to listen to the processing and allow it to influence performance. The composer has relinquished the majority of the control of the composition by allowing the performer to improvise and by applying algorithmic composition to the computer part. The composer’s role is unique in this work because he has created an interactive music system rather than a defined composition for performer and electronics.

5.2.1. Roles of Performer and Computer

The role of the performer in this work is as performer, instrument, composer, and, to a lesser extent, conductor. Improvisation is simultaneous performance and composition, in this case using an instrument; therefore supplying no notated material allows the performer to maintain these roles throughout the entire work. The roles of performer and instrument are obviously omnipresent, but the compositional role continues throughout as the performer responds to processed versions of their performance. The performer is also allowed the freedom to conduct the piece, as each event is triggered by the performer using a pedal. The flexibility of improvisation allows for deep interactions with the computer system.

The computer plays the roles of composer, performer, instrument, and conductor. Each event involves the computer recording the live performer input and manipulating the playback of these recordings, and the pitch, speed, spatialization, and volume of the recordings are constantly manipulated through controlled randomization algorithms. This randomization allows the computer to play the role of composer as it chooses how much to manipulate the sound. The computer plays back these recorded sounds, and therefore it is also a performer. The role of instrument can be inferred from the application of reverb, delay, and pitch shifting upon the performer’s sound, creating a digitally enhanced instrument. The computer is also a conductor because each event is timed and the performer follows a computerized clock that counts elapsed time in seconds, as seen in the middle of Figure 6, to help him/her follow the progression of the patch. The supplied document also refers to certain points where the processing should be allowed to develop before progressing. By enhancing the role the computer plays in this work the composer is able to define this program as an interactive music system.

Figure 6 - Screenshot of It’s Like the Nothing Never Was

5.2.2. Developing Interaction through Randomization

As stated previously, this interactive music system contains no prerecorded material; therefore all of the sounds generated by the computer originate from live recordings of the performer. Interaction with these recordings could occur if the material were presented exactly as recorded, but the interaction would become stale, as there would be no change in the performer’s expectations. The randomization of variables within these recordings enhances interaction because the performer is influenced by unexpected events.33

33 Thomas E. Janzen, “AlgoRhythms: Real-Time Algorithmic Composition for a Microcomputer,” in Readings in Computer-Generated Music, ed. Denis Baggi (Los Alamitos, California: IEEE Computer Society Press, 1992).

In It’s Like the Nothing Never Was, several variables of the recorded sound are randomized, and the level of randomization is altered throughout the work. Randomized variables of playback include pitch, speed, volume, spatialization, and reverb level.

Randomization is achieved in this Max/MSP patch through the use of an object called “random” that chooses a random number within a specified range. The number value is then output and can be used to control any number of effects. For example, the playback speed is randomized throughout the composition, manipulating rhythm and pitch. Unlike the phase vocoder used in 120bpm, time and pitch are not controlled separately in this work because a different type of processing is occurring. This process can be understood if one imagines changing the speed of a tape recorder. As the speed of the tape is increased, the pitch increases as well; conversely, if the playback rate is decreased, the sound becomes very low and drawn out. Tempo and pitch can be altered to such an extent that the result is very different from the original recording; however, since the recorded material is never played in reverse, there is always a relationship to the original recording. The randomization of all of these variables helps to apply choice to the interactive music system, but another object is needed to make these sounds blend with the live performer.
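The tape-speed analogy can be sketched in a few lines of Python with NumPy. This is an illustration of the coupled behavior rather than code from Hansen’s patch, and the 80–120% range is borrowed from the Event 1 values discussed below.

```python
import numpy as np

def vary_speed(recording, rate):
    """Resample a mono recording by `rate` (1.0 = original speed).

    Reading the samples faster (rate > 1) shortens the sound and raises its
    pitch; reading slower (rate < 1) lengthens it and lowers the pitch.
    Time and pitch are coupled, unlike in a phase vocoder.
    """
    positions = np.arange(0, len(recording) - 1, rate)
    return np.interp(positions, np.arange(len(recording)), recording)

# The Max "random" object picks a value within a specified range; here that is
# mimicked by choosing a playback rate between 80% and 120% of the original speed.
rng = np.random.default_rng()
rate = rng.uniform(0.8, 1.2)
# processed = vary_speed(live_recording, rate)   # live_recording: a NumPy array
```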

It is the composer’s intention that the processed sounds be as natural as possible so that they blend with the acoustic instruments. If randomization were presented alone, the playback would jump abruptly to the chosen numbers, making for obvious changes. To make this interactive system more subtle and natural, the “line” object is employed. The line object moves from one number to another over a specified amount of time. Changing amplitude using a line object is like slowly turning the volume knob on a stereo rather than jumping from a soft volume to a loud volume instantly. When applied to the playback rate, this tool creates slow pitch bends between notes as the playback speed is ramped up or down.
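A rough Python equivalent of this pairing of the random and line objects might look like the following; the sample rate, ramp time, and variable names are assumptions made only for illustration.

```python
import numpy as np

def line_ramp(current, target, seconds, sr=44100):
    """Mimic the behavior of Max/MSP's [line] object: glide smoothly from
    `current` to `target` over `seconds`, one control value per sample."""
    return np.linspace(current, target, int(seconds * sr))

rng = np.random.default_rng()
rate = 1.0                                    # current playback rate
target = rng.uniform(0.8, 1.2)                # new rate chosen by "random"
ramp = line_ramp(rate, target, seconds=20.0)  # glide instead of jumping
rate = ramp[-1]                               # the endpoint becomes the new rate
```

Applied to amplitude this behaves like a slowly turned volume knob; applied to playback rate it produces the gradual pitch bends described above.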

5.2.3. Composing through Controlled Randomization

This work is completely improvised by the performer and the computer is making compositional choices, so how does the composer influence the work? Though randomization is involved and the actual numbers chosen cannot be controlled, Hansen inserts his voice by defining the range of numbers from which the computer can choose. The basic structure of this composition is an arch form with a definite climax. To build to this climax the composer begins with a small range of numbers and expands this range up to the climax. For example, the playback rate of the first recording in Event 1 is defined as anywhere between 80% and 120% of the original speed. Furthermore, the line object ramps between these numbers over a 20 to 25 second period. Each event triggers an increase in the range of the playback rate and a shorter amount of ramping time. By the climax of the work, recording 1 uses a playback rate between 40% and 140% that is ramped every 5 to 10 seconds. Since the ramping rate is much quicker and the range between random numbers is often quite large, the pitch bend effect is pronounced and active. By manipulating the randomization algorithm the composer is able to impose a certain level of control upon the interactive music system.
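One way to picture this arch form in code is sketched below. Only the Event 1 and climax values come from the description above; the intermediate events are interpolated here purely as assumptions for the sake of illustration.

```python
import numpy as np

rng = np.random.default_rng()

# Per-event bounds for recording 1: (low rate, high rate, min ramp s, max ramp s).
# Event 1 and the climax match the values described in the text; the
# intermediate events are invented here to show the expanding ranges.
events = [
    (0.80, 1.20, 20, 25),   # Event 1
    (0.70, 1.25, 16, 21),
    (0.60, 1.30, 12, 17),
    (0.50, 1.35, 8, 13),
    (0.40, 1.40, 5, 10),    # climax
]

def next_gesture(event_index):
    """Choose a new playback-rate target and ramp time for a given event."""
    lo, hi, t_min, t_max = events[event_index]
    return rng.uniform(lo, hi), rng.uniform(t_min, t_max)

for i in range(len(events)):
    rate, ramp_time = next_gesture(i)
    print(f"Event {i + 1}: ramp to rate {rate:.2f} over {ramp_time:.1f} s")
```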

By the climax of the work the computer has recorded the performer six different times, and these six recordings are manipulated through eleven different playback objects. For example, the first sixty-second recording is placed in three different playback objects, each of which is working in a different playback range and with a separate panner. This creates pitch bends throughout the sound spectrum that swirl around the space and combine with one another. Hansen is able to increase the density of the texture by starting with only one of these objects active and introducing the others as the piece progresses toward the climax. By applying control to the randomization and manipulating recorded sounds using different computer processes, the composer is able to define a basic structure within the interactive music system and insert his voice.
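The spatial element of these layered playback objects can be sketched with a simple equal-power panner; again this is an illustrative stand-in, not the panning algorithm used in the patch, and the placeholder recording stands in for the sixty-second buffer described above.

```python
import numpy as np

def pan_stereo(mono, position):
    """Equal-power pan of a mono signal: 0.0 = hard left, 1.0 = hard right."""
    angle = position * np.pi / 2
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# Three independent "players" reading the same recording, each with its own
# randomly chosen pan position; activating them one at a time thickens the texture.
rng = np.random.default_rng()
recording = np.zeros(44100)   # placeholder for the first recording
voices = [pan_stereo(recording, rng.uniform(0.0, 1.0)) for _ in range(3)]
```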

5.2.4. Improvisation Influenced by Processing

Not only is the performer interacting with the manipulated playback of his/her sound, he/she is also influenced by the real-time effects placed upon his/her signal. Reverb is used extensively throughout this work, and the decay time and simulated size of the space are consistently varied. Reverb has a profound effect on performer activity because highly reverberant spaces tend to draw less activity from a performer, while drier spaces bring out more rhythmically dense figures. Another live-processing technique is the application of four delay objects simultaneously to distribute an echo of the performer’s sound throughout the space. The performer must adjust his/her gestures to draw attention to the delay.
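As a point of reference, a bank of delay objects can be modeled as a multi-tap delay line; the tap times and gains below are invented for illustration and are not taken from the work.

```python
import numpy as np

def multi_tap_delay(x, sr, taps):
    """Sum several delayed copies of a signal; `taps` is a list of
    (delay_seconds, gain) pairs, one pair per delay object."""
    longest = int(max(t for t, _ in taps) * sr)
    out = np.zeros(len(x) + longest)
    out[:len(x)] += x                          # the dry (unprocessed) signal
    for delay_s, gain in taps:
        start = int(delay_s * sr)
        out[start:start + len(x)] += gain * x  # one echo per tap
    return out

# Four simultaneous delays at staggered times, echoing the performer's sound.
sr = 44100
live_input = np.zeros(sr)   # placeholder for one second of recorded input
echoed = multi_tap_delay(live_input, sr,
                         [(0.25, 0.7), (0.5, 0.5), (0.75, 0.4), (1.0, 0.3)])
```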

An object mirroring the phase vocoder is used to manipulate pitch. Four of these pitch shifters are employed, using the random and line objects to create a constantly moving, live harmonization of the performer. Randomized live processing results in deep, subjective interaction because the performer must react instantly to how his/her sound is presented. Performers are able to experience the manipulation of their signal in real time and respond in a way that best complements the characteristics of this sound.
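A non-real-time approximation of such a harmonizer can be written with librosa’s pitch shifting. The transposition range and placeholder signal are assumptions; in the actual system these values would be supplied continuously by the random and line objects rather than fixed offline.

```python
import librosa
import numpy as np

sr = 44100
rng = np.random.default_rng()
live_input = rng.standard_normal(sr) * 0.01   # placeholder for recorded input

# Four independently transposed copies layered over the original sound
# (transpositions in semitones are chosen at random for illustration).
shifts = [librosa.effects.pitch_shift(live_input, sr=sr, n_steps=rng.uniform(-7.0, 7.0))
          for _ in range(4)]
harmonized = live_input + sum(shifts) / 4
```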
