Schizophonies and the new electroacoustics

jjburred

Pitch, intensity, timbre and time. These are the four properties most often cited as the compositional parameters of a sound event. There is a fifth element whose mention is much less common, and whose use remained marginal until well into the 20th century: space. Before the advent of recording and amplification techniques, composers experimented with the spatial dimension by placing instruments and singers at distant points of the performance space, as in Gabrieli's late-Renaissance polychoral works (inspired by the peculiar architectural layout of the Basilica of San Marco in Venice), or in the occasional placement of certain instruments offstage, away from the orchestra, in numerous later symphonic and operatic works.

With electricity came loudspeaker systems and the possibility of freely distributing them in configurations adapted to each room or work. It is worth distinguishing between mere amplification, intended simply to increase the sound intensity of the instruments [1], and spatialization, which implies the possibility of assigning precise positions (or trajectories) to each sound event, independently of the position of the original source. Spatialization therefore elevates space to a parameter of the first order. In spatialized electroacoustic music concerts it is common to see arrays of loudspeakers arranged around the audience, and in some cases above or below it [2]. There are numerous methods of spatialization using loudspeaker systems, from the classic surround of cinemas and living rooms to sophisticated techniques such as Ambisonics or Wave Field Synthesis, which (the latter in particular) can require hundreds of loudspeakers. A crucial point to keep in mind about this type of system is that the projected sound (the virtual source) can be perceived as coming from any point within the perimeter defined by the positions of the loudspeakers.
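
At its simplest, assigning a position to a virtual source comes down to distributing a signal's energy across two or more loudspeakers. The sketch below illustrates the idea with equal-power amplitude panning between a loudspeaker pair; the function name and parameter ranges are illustrative, not taken from any particular spatialization system.

```python
import numpy as np

def equal_power_pan(signal, azimuth):
    """Pan a mono signal between a loudspeaker pair using equal-power
    amplitude panning. `azimuth` runs from -1 (fully left) to +1 (fully
    right) and may be a scalar or a per-sample trajectory."""
    theta = (np.asarray(azimuth) + 1.0) * np.pi / 4.0    # map to [0, pi/2]
    return np.stack([np.cos(theta) * signal,             # left channel
                     np.sin(theta) * signal], axis=-1)   # right channel

# Example: a 440 Hz tone whose virtual source glides left to right in 2 s.
sr = 44100
t = np.arange(2 * sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)
stereo = equal_power_pan(tone, np.linspace(-1.0, 1.0, tone.size))
```

Since cos² + sin² = 1, the total radiated power stays constant along the trajectory, which is what keeps the virtual source's loudness stable as it moves.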

When a loudspeaker system projects a sound to a position far from the original source, we have what R. Murray Schafer calls a schizophony: the separation between a sound and its originating source. Radio and recordings are by definition schizophonic, as is an amplified music concert, where the instruments are usually mixed and projected from loudspeakers on both sides of the stage, the position of each performer losing all relevance. In the case of spatialization, the schizophonic effect is even greater, since systems such as those mentioned above allow sounds to escape the confines of the stage.

It could then be said that spatialization and schizophony are two sides of the same coin, and that the degrees of each are directly proportional [3]. Given its negative connotations (Schafer himself said that his neologism was intended to convey a sense of “aberration and drama” [4]), it is possible to understand schizophony as the set of unwanted side effects of spatialization. It can manifest itself when electronics are used not to give sounds artificial localization or movement, but as a tool to expand the timbral palette of traditional instruments. In that case, a major challenge for the composer is to overcome the dissociation effect and achieve a satisfactory blend between the electronic and acoustic timbres: to avoid the auditory sensation of an electronic cloud floating above the orchestra when what is sought is fusion.

How to avoid source/electronics dissociation when it is not desired? An obvious way to reduce it considerably is to place a loudspeaker next to each electronically treated instrument, but this introduces a new difficulty: achieving fusion, this time locally, in terms of equalization, intensity and the directivity of the sound radiation. In a perfect fusion situation, the electronically expanded timbres should give the impression of being generated by the acoustic instrument itself, by means of some mechanism invisible to the audience, and even to the performer, and with the same directivity pattern. So, is electronic music possible without loudspeakers?

One of the most recent trends in music technology offers a possible answer: the field of augmented instruments, also sometimes called actuated, powered or smart instruments [5]. Instead of sending the electronic treatment of an acoustic source to a set of loudspeakers, it is reinjected into the original instrument by a series of electromagnetic devices (motors, actuators, exciters) coupled to the vibrating element (string, membrane) or to the resonance body. The result is that the electronic sound, derived from the original acoustic sound, emanates from the instrument itself.
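
In signal terms, the architecture is a closed loop: a sensor (e.g. a pickup) captures the acoustic vibration, the computer applies some treatment, and the result drives an actuator coupled to the instrument. Below is a minimal sketch of such a loop using the Python sounddevice library; the assumption that the sound card's input is wired to a pickup and its output to an amplifier driving the actuator is hypothetical, as is the choice of ring modulation as the treatment.

```python
import numpy as np
import sounddevice as sd

SR = 48000   # sample rate
phase = 0    # running sample counter, keeps the modulator continuous

def callback(indata, outdata, frames, time, status):
    """Read the instrument's vibration from the pickup (input channel),
    ring-modulate it at 30 Hz as a stand-in for any electronic treatment,
    and send the result to the actuator (output channel)."""
    global phase
    t = (np.arange(frames) + phase) / SR
    modulator = np.sin(2 * np.pi * 30.0 * t)
    # Keep the gain low: sensor and actuator share the same vibrating
    # body, so this loop can easily run into runaway feedback.
    outdata[:, 0] = 0.3 * indata[:, 0] * modulator
    phase += frames

# One input channel (pickup) and one output channel (actuator).
with sd.Stream(samplerate=SR, blocksize=256, channels=1, callback=callback):
    sd.sleep(10_000)   # run the reinjection loop for ten seconds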

[Image: Detail of the piano augmented with electromagnets developed by Per Bloland]

Young composers such as Per Bloland, Andrew McPherson, Neil Cameron Britt and Ralph Killhertz have begun to follow this path using pianos and vibraphones equipped with such systems. The Augmented Instruments Laboratory at Queen Mary University of London has produced instruments such as the Magnetic Resonator Piano (MRP), whose strings are set into vibration by electromagnets, giving a traditional piano the ability to produce crescendos, vibratos or pitch bends without the need for loudspeakers. The Parisian start-up HyVibe markets an augmented acoustic guitar, equipped with digital effects that emerge from its own soundboard, which was enthusiastically received at the recent NAMM show.

In some cases (piano, percussion), the expansion of the timbre palette must be accompanied by an expansion of the performer/instrument interface. For example, an augmented piano capable of producing vibratos also requires an augmented keyboard capable, through sensors on the keys, of detecting continuous changes in finger pressure.
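
As a rough illustration of such a mapping, the sketch below converts a continuous key-position trace into a time-varying pitch offset, with the depth of the press controlling the vibrato depth. The function name, ranges and the 5 Hz sinusoidal vibrato are illustrative assumptions, not the mapping of any particular instrument.

```python
import numpy as np

def key_to_vibrato(key_position, max_depth_cents=50.0, rate_hz=5.0, sr=48000):
    """Map a continuous key-position trace (0 = at rest, 1 = fully
    pressed, one value per audio sample) to a pitch-offset signal in
    cents, as an augmented keyboard controller might do."""
    t = np.arange(key_position.size) / sr
    lfo = np.sin(2 * np.pi * rate_hz * t)    # vibrato oscillator
    depth = max_depth_cents * key_position   # deeper press -> deeper vibrato
    return depth * lfo                       # pitch offset in cents

# Aftertouch-style gesture: press fully, then ease off over half a second.
sr = 48000
gesture = np.concatenate([np.ones(sr // 2), np.linspace(1.0, 0.2, sr // 2)])
pitch_offset_cents = key_to_vibrato(gesture, sr=sr)
```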

This new kind of instrument, with electronic treatment but acoustic radiation, eliminates the need for loudspeakers and, therefore, the risk of schizophony. Still more recently, some composers have gone a step further, seizing on the new technical possibilities of augmented instruments to subvert their original goal. In his work Inside-Out, premiered in 2017, Carmine-Emanuele Cella uses sensors and actuators to force an extreme sound dissociation: sensors attached to each of the four instruments (piano, metal plate, bass drum and gong), placed in the four corners of the room, drive actuators attached to the opposite instrument. Each instrument injects its sound into the instrument facing it (a sketch of this routing follows below). More than a sound separation, we are dealing here with a genuine identity theft.
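
The cross-routing described above can be written as a simple matrix that sends each sensor signal to the actuator of another instrument. The sketch below pairs opposite corners; the instrument order and the exact pairing are assumptions for the example, not details taken from the score.

```python
import numpy as np

instruments = ["piano", "metal plate", "bass drum", "gong"]

# Routing matrix R: actuator i receives sum_j R[i, j] * sensor_j.
# Here instruments 0<->2 and 1<->3 occupy opposite corners.
R = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

sensors = np.random.randn(4, 1024)   # stand-in for one block of sensor audio
actuators = R @ sensors              # each instrument "speaks" through its opposite
```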

Augmented instruments have enabled a new and exciting way of understanding electronic music, and call for a new meaning of the adjective “electroacoustic”.

Notes

[1] Any amplification system involves some distortion of the original sound. This alteration is not always undesirable: in the case of electric guitars, for example, the distortion produced by the amplifier is considered an essential part of the tone.

[2] Listening rooms with loudspeakers located below the audience level are very rare. Two examples are the listening room of the CCRMA research center at Stanford University and the Sonic Laboratory of the SARC center in Belfast.

[3] An exception: purely electronic (acousmatic) works, where there is no visual reference to the original sound sources.

[4] R. Murray Schafer: The New Soundscape (Spanish edition: El nuevo paisaje sonoro, Ricordi Americana, Buenos Aires, 1998).

[5] This last denomination (smart instruments), certainly vague, is often used to refer to commercial products and belongs more to the field of marketing.

