18. Song no 3 (2010)

Song no 3 (2010) for one performer, loudspeaker, microphone and live electronics
written for KOFOMI

Photo © Silvana Torrinha

The performer holds a microphone; a large piece of white paper is glued onto the loudspeaker.

 

Song No 3 is a performance during which I use arm gestures, normally produced by singers as a by-product of their singing, as a means to control electronic sound. I look for ways to use the singer’s arm and head gestures as a means of producing sound. Within traditional performance practice, a singer’s movements are incidental to the production of sound. I decided to use these movements for sound production in Song No 3 because they are related to making sound but not perceived as its main cause: a singer may be able to sing without any arm movements (although it will probably sound different), but definitely not without breathing and pushing air through the vocal cords.

To render my arm and head movements audible, I decided to place a loudspeaker in front of my mouth to diffuse the sound during the performance. I do not sing at all during the performance: all sounds are generated by the computer software and emitted through the loudspeaker. A large piece of white paper is glued onto the loudspeaker to obscure my face. The audience’s attention is thus diverted from the mouth, which visually signals the presence of sound, to the gestures of the arms and head. The paper is attached directly to the loudspeaker membrane so that it resonates together with it and gives the sound a more readily recognisable physical source. To make the sound interact with the movements of my arm, I use a microphone, exactly as singers do. The distance between the microphone and the loudspeaker can be manipulated to produce different sound-shaping processes. The amplitude of the microphone input signal is used by the computer to control what kind of sound processing is diffused through the loudspeaker. The foundation of this control mechanism is data feedback, since the sound of the loudspeaker influences the amplitude of the microphone signal, which in turn controls, with the help of the computer software, the sound of the loudspeaker output. The same technique is used in, for example, Open Air Bach by Lara Stanic. Sound can therefore be controlled using the gestures that normally only accompany a singing voice.

 


 

The set-up for Song No 3
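As a rough illustration of this feedback-based control mechanism, here is a minimal sketch in Python. It is not the software used in the piece, whose details are not given here; it assumes a simple RMS envelope follower on the microphone signal, an arbitrary threshold for switching between two kinds of output, and a distance-dependent coupling between loudspeaker and microphone, all of which are invented for the example.

```python
# Hypothetical sketch of the Song No 3 control loop: the level of the microphone
# signal selects how the computer processes sound, and the loudspeaker output in
# turn feeds back into the microphone level. Illustration only.
import numpy as np

BLOCK = 512          # samples per control block
SR = 44100           # sample rate in Hz

def envelope(block):
    """RMS amplitude of one block of the microphone signal."""
    return float(np.sqrt(np.mean(block ** 2)))

def process(level, n):
    """Choose the loudspeaker output depending on the microphone level
    (an invented mapping, not the one composed for the piece)."""
    if level > 0.15:                                      # microphone close: dense, noisy output
        return 0.8 * np.random.uniform(-1.0, 1.0, n)
    t = np.arange(n) / SR                                 # microphone far: soft, stable tone
    return 0.3 * np.sin(2 * np.pi * 220.0 * t)

# Simulate the data feedback: the loudspeaker output leaks back into the microphone,
# and how much it leaks depends on how close the performer holds the microphone.
mic_block = np.random.uniform(-0.05, 0.05, BLOCK)         # initial room noise
for step in range(10):
    distance = 1.0 - step / 10.0                          # the microphone slowly approaches
    level = envelope(mic_block)
    speaker_block = process(level, BLOCK)
    print(f"step {step}: distance {distance:.1f}, mic level {level:.3f}")
    mic_block = (1.0 - 0.9 * distance) * speaker_block    # closer microphone -> stronger pickup
```

In this toy version, moving the simulated microphone closer increases the coupling, raises the measured level, and eventually flips the processing into the noisier state; the piece builds on this same loop between microphone level, software, and loudspeaker.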

 

These movements normally used during singing can be seen as symbolic gestures: they have a specific meaning which points to an activity or to something outside of the movements themselves. It is for this reason that I started to work with them, since they have a strong auditory connotation; we can almost hear a singer singing when we see the movements of a singer’s arms. To use these gestures to control sound is to intervene in their normal meaning by affixing to them a different sound not previously associated with them. The different relationships between sound and gesture can be composed, and the relationships established during the piece can be questioned during the same performance by establishing another gesture-sound relationship. Due to these constant changes, shifting from one identity to another, a fixed personality on stage can never exist.

Photo © Silvana Torrinha

To make the sound interact with the movements of my arm, I use a microphone.

By using several kinds of gestures to control the sound, as well as different sound-processing possibilities offered by the computer, the relationship between sound and gesture changes during the performance. In this way, not only are the means of producing sound different from those of a traditional singer, but so are the changes in sound normally produced by movements with a microphone. Whereas during a normal singing performance moving the microphone closer to the mouth will amplify the sound of the voice, during Song No 3 moving the microphone closer to the loudspeaker attached in front of the mouth can result in all kinds of audible changes. The sound itself can change in a number of ways, since it is the amplitude of the microphone signal that is mapped onto several parameter changes. By moving the microphone, the sound may not only become louder or softer, but also faster, slower or noisier, or undergo a change in pitch. In fact, every possible sound is available, and it becomes the task of the composer to choose what kind of sound should be produced by what kind of movement. I can therefore decide which gesture makes what kind of sound in a manner that is quite different from a conventional acoustic instrument. I am thus able to compose the relationships between performer movements and sounding results, a concept I discuss in chapter 5.
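To make the idea of mapping one control value onto several parameters concrete, here is a small hypothetical example; the parameter names and ranges are invented for illustration and are not the mappings composed for Song No 3.

```python
# Hypothetical mapping of a single microphone level onto several processing
# parameters at once; names and ranges are invented for this illustration.
import numpy as np

def map_amplitude(level):
    """Map one microphone level in [0, 1] onto several parameters simultaneously."""
    level = float(np.clip(level, 0.0, 1.0))
    return {
        "gain": 0.2 + 0.8 * level,            # louder as the microphone comes closer
        "playback_rate": 0.5 + 1.5 * level,   # from half speed up to double speed
        "transposition": -12 + 24 * level,    # semitones: an octave down to an octave up
        "noise_mix": level ** 2,              # noise only appears at higher levels
    }

for level in (0.1, 0.5, 0.9):
    print(level, map_amplitude(level))
```

A single measured value can thus drive loudness, speed, pitch and noisiness at once; which of these relationships is active at any moment is a compositional decision rather than an acoustic given.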


A page from the score of Song No 3. This score is for the version of Song No 3 for Ute Wassermann, which is slightly different from the version I perform myself.

In Song No 3, at the beginning of the performance, every time the microphone comes closer to the mouth-loudspeaker the sound becomes much louder and more chaotic. At the end of the performance, the opposite is the case: only when the microphone is held very close to the mouth-loudspeaker does the sound become quite soft and tranquil, and as soon as the microphone is taken away from the mouth, a loud singing voice can be heard through the loudspeaker. The consequences of certain movements thus change during the performance: from consequences we expect due to our cultural context (a microphone close to a sound source will normally amplify that sound source) to consequences we do not necessarily expect. This change of consequences forms a dramatic line through the performance, and because of it the performer’s characteristics change as well. These changes happen through playing the instrument: by playing it, the identities of instrument and performer shift (see more on gestural identity in chapter 5).
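One simple way such a reversal could be realised is to crossfade between a direct mapping and its inverse over the course of the performance; the sketch below is an assumption about how this might be done, not a description of the actual patch.

```python
# Hypothetical sketch: the consequence of the same gesture reverses over the
# course of the performance. Early on, a close microphone (high level) yields
# loud, chaotic output; by the end, the same proximity yields soft output.
def intensity(mic_level, progress):
    """Output intensity for a microphone level in [0, 1] at a given point in the
    performance (progress 0 = beginning, 1 = end)."""
    direct = mic_level            # beginning: closer means louder and more chaotic
    inverted = 1.0 - mic_level    # end: closer means softer and more tranquil
    return (1.0 - progress) * direct + progress * inverted

for progress in (0.0, 0.5, 1.0):
    close = intensity(0.9, progress)   # microphone held close to the mouth-loudspeaker
    far = intensity(0.1, progress)     # microphone taken away from the mouth
    print(f"progress {progress:.1f}: close -> {close:.2f}, far -> {far:.2f}")
```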

To compose such changing consequences between movements and sound, electricity is not merely necessary: through loudspeakers and microphones it can be the very means of producing the shifts in identity described above. Since we do not connect a specific sound with them, as we would with, for example, a violin, they act like chameleons, changing their identity depending on the context. This context is formed by playing them and thereby giving them an identity. This change of identity is exactly what calls for a different compositional approach: by using microphones and loudspeakers as musical instruments, aspects of composition can come into focus that differ from those of acoustic instruments. A composition can focus on identity shifts of the instrument itself, since this instrument does not have an identity as rigid as that of the piano or violin. The microphone-loudspeaker instrument is flexible in sound as well as in playing method.