‘Song for Aberfan’ – A practice-led investigation of the interplay between musical performance, film and narrative

by BEN EYES

Background

Music and film are both time-based media: they evolve over time and have duration as a dimension. This allows artists to explore how the two may interact, in parallel, as part of fixed pieces or as performances. ‘Expanded Cinema’ was the name Gene Youngblood (1970) gave, in his book of the same title, to the exploration of the interplay between the visual and the aural in the 1960s and 1970s through experimental film, music and performance. Pieces such as Robert Whitman’s ‘Prune Flat’ (1965) explored the use of projection, sound and live performance. Today we take for granted the use of moving image within a musical performance context. Composers such as Max Richter (‘Waltz with Bashir’, 2008) and Mica Levi (‘Under the Skin’, 2013) perform their scores live, with full orchestras and projections of the films onto the stage. Bands such as Massive Attack are collaborating with filmmakers like Adam Curtis to create large-scale audiovisual shows fusing live musical performance, video and documentary (Massive Attack v Adam Curtis, Manchester International Festival, 2013).

Unlike cinema, where sound, music and film are closely intertwined to reinforce narrative structures, live musical performances usually lack narrative coherence. The use of moving image is often arbitrary, with little interplay between it and the musical performance presented. Within electronic music, the VJ (video jockey) is often observed playing back a seemingly random set of video clips with little or no connection to the musical content or the performance itself. This is true of many sub-genres of electronic music, from art music to ambient and techno. It creates a contextual distance between the music, the video material and the audience and, I would argue, fails to provide a satisfactory performance. Many have investigated the idea of expanded cinema and the accompaniment of silent film in musical performance: from the very beginnings of cinema, with a pianist accompanying the silent film, to artists such as Robert Whitman and Ryoji Ikeda creating audiovisual work with closely synchronized visual material.

To explore this further I created an audio-visual piece that investigates the close relationship between narrative, film and live musical performance using electronics and live musicians.

Aims

The aim of the piece is to ask how we might take a subject such as the Aberfan mining disaster and bind sound, musical performance and film together, enhancing the narrative aspect of the original film and creating a compelling, immersive experience for the audience. The tragic accident at the village of Aberfan, in which a spoil heap collapsed onto a school and killed 116 children and 28 adults, was taken as the subject matter. I wanted to take this difficult topic and place the audience directly into the events, giving them a sense of the claustrophobia of the steep valleys and of the collapse itself through sound, video and poetry.

Main contribution

Method

The film was first created using archive footage from the scene of the Aberfan mining disaster in Wales, 1966. The footage was edited to create a narrative aligned with the events of the day and the impact the accident had on the village community. The completed film was then used in the studio to create a fixed structure which the musicians could follow and improvise around. Musicians were asked to watch the film and respond to it musically. While they were being recorded, live electronic effects such as reverb and delay were applied to their instruments. Sound treatments were chosen carefully to capture a sound (if not an aesthetic) of the time: digital recreations of spring reverbs and tape delays were used, as these treatments were available at the time of the disaster. The instruments played and recorded were erhu (a bowed two-stringed instrument), amplified springs of different sizes and amplified double bass.

A narration of the poem ‘Aberfan: Under the Arc Lights’ by Rhys Keidrich, performed by Tym Dylan, was also added to the piece. The poem was edited to fit, with parts placed before and after the collapse segment. All instruments were processed in real time and set in an artificial acoustic space, using delay and reverb to create a specific sound world. This space was designed to reinforce the reverberant nature of the steep-sided valleys depicted in the film and to add a despondent, eerie mood and texture to the instruments. The live instruments were treated similarly with reverb and delay, which helped the performers find a feeling of atmosphere and enter the sound world. The instrumental recordings were then edited, and various overdubs of synthesizers and guitars were made. In performance, the video is played on a large screen and the instrumentalists respond to it in a structured improvisation. Real-time effects and processing are applied to all instruments. The narration can be performed live or included in the backing.
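As an illustrative sketch only (the piece used dedicated audio software, and the parameter values here are assumptions, not the settings used), the tape-delay treatment described above can be modelled digitally as a feedback delay line, in which each output sample is fed back, attenuated, into the delay buffer:

```python
def feedback_delay(samples, delay_seconds=0.35, feedback=0.5, mix=0.5, sample_rate=44100):
    """Mix each input sample with a delayed copy, feeding the echo back
    into the delay line so it repeats and decays, as on a tape echo."""
    delay_samples = int(delay_seconds * sample_rate)
    buffer = [0.0] * delay_samples  # circular delay buffer
    out = []
    idx = 0
    for x in samples:
        delayed = buffer[idx]
        y = (1 - mix) * x + mix * delayed          # dry/wet blend
        buffer[idx] = x + feedback * delayed        # feedback into the line
        idx = (idx + 1) % delay_samples             # advance circularly
        out.append(y)
    return out
```

Each pass of the loop halves the echo (at `feedback=0.5`), so an impulse produces a train of repeats decaying toward silence, much as successive tape passes lose level.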

To provide an immersive experiential state, the audio is presented either in ambisonic format, which offers the possibility of periphony and a 360° perception of sound, or in 5.1 surround sound, which surrounds the audience horizontally. The audience thus experiences sound from all directions, and the mix of the piece has been made to take advantage of this, moving certain sounds around the audience and, at certain moments, having them come from above and below. Several live performances of the piece have taken place, in both the Lyons and Rymer concert halls at the University of York.
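To illustrate the principle behind the ambisonic presentation (this is a general first-order B-format encoding sketch, not the actual mix workflow; the function name and arguments are illustrative), a mono source is placed at an azimuth and elevation by distributing it across four channels:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (traditional/FuMa
    convention): W is the omnidirectional component, X/Y/Z are
    figure-of-eight components on the front/back, left/right and
    up/down axes. Angles are in radians."""
    w = sample * (1 / math.sqrt(2))                       # omni, reduced ~3 dB
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front/back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left/right
    z = sample * math.sin(elevation)                      # up/down
    return w, x, y, z
```

Because the Z channel carries height information, a decoder feeding speakers above and below the audience can reproduce the vertical movement described above, which a horizontal-only 5.1 layout cannot.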

Results

The performances themselves, although satisfactory, presented a number of problems. The first was that the acoustics of the Lyons (a long reverberation time and a large amount of mid-band resonance) made the narration of the poem hard to hear clearly, which is obvious on the video recording of the piece. The second was that the ensemble could not clearly see the visual cues from the video. Reflecting on the performance, it would have been more satisfactory to have the erhu played live, as it forms the main structure of the piece and gives it a haunting, otherworldly quality; owing to the player’s availability, however, the erhu part was played from hard disk. As Emmerson states: ‘The fixed nature of the electroacoustic part means that in many cases – unless the composer does not consider synchronisation of live and tape important – the tape part is a dictatorial and perfect metronome (beating clock time)’ (Emmerson, 2007: 108). The piece, however, is in loose time, with no fixed time signature and a pulse sometimes added by the spring percussion. This allows the musicians to move around the fixed erhu part and to improvise within the four sections.

Conclusions

Creating a video as a score and then developing musical performance synchronized to that film allows new forms of multi-modal performance to be developed. The final performance is closely synchronized to the film, far more closely than when video is created separately to accompany a performance. This multi-modal way of working opens new avenues of creativity and strengthens the link between the narrative and the audio-visual material.

Notes

Address for correspondence: Ben Eyes, Department of Music, University of York, York, North Yorkshire, YO10 5DD

Email: ben.eyes@york.ac.uk.

References

Emmerson, S. (2007). Living Electronic Music. Farnham: Ashgate.

Massive Attack v Adam Curtis. (2013, July 4). Live performance at Manchester International Festival, Mayfield Depot, Manchester.

Youngblood, G. (1970). Expanded Cinema. New York: E. P. Dutton & Co.