
BEE ARTS COMMUNITY INTEREST COMPANY IS RESEARCHING AND DEVELOPING THE ARTISTIC POSSIBILITIES FOR A NEW WORK BASED ON THE NON-INVASIVE DIAGNOSIS OF BREATH. THIS PROJECT IS FUNDED BY THE WELLCOME TRUST.

AUGUST 2009 - MARCH 2010

Inside of the PTR-MS machine

HOW BRE4TH WAS DEVELOPED - SONIFICATION


Initial Thoughts

The data resulting from the analysis of a breath sample using Proton Transfer Reaction Mass Spectrometry (PTR-MS) can vary depending on how the scan was set up, but generally speaking a mass scan of breath yields information about c.300 Volatile Organic Compounds (VOCs) - a large data set. Our main challenge when considering how to sonify this information was to develop a style that brought the VOCs of interest to the forefront whilst pushing the remaining VOCs, in an exaggerated way, into the background. The degree of exaggeration was based on experimentation in *listening* to the data alone, *looking* at the data alone, and simultaneously *looking and listening* to the data.
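As a rough illustration of this foreground/background split, the sketch below assumes a hypothetical data layout (a mass scan as a mapping from m/z value to concentration) and an illustrative attenuation figure, not values we actually used. The two masses picked out correspond to the protonated masses of isoprene (m/z 69) and acetone (m/z 59) in PTR-MS:

```python
# Hypothetical layout: a mass scan as {m/z: concentration (ppb)},
# split into a few VOCs of interest (foreground) and the remaining
# c.300 (background), whose loudness is exaggeratedly reduced.

FOREGROUND_MASSES = {69, 59}  # e.g. isoprene (m/z 69), acetone (m/z 59)

def split_scan(scan, attenuation_db=-30.0):
    """Return (foreground, background); each background VOC carries an
    exaggerated gain reduction so it recedes into the sound world."""
    foreground = {m: c for m, c in scan.items() if m in FOREGROUND_MASSES}
    background = {m: (c, attenuation_db) for m, c in scan.items()
                  if m not in FOREGROUND_MASSES}
    return foreground, background

fg, bg = split_scan({69: 105.0, 59: 480.0, 33: 210.0, 42: 15.0})
```

The degree of attenuation is exactly the "exaggeration" parameter we tuned by listening and looking.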

We spent a long time researching and experimenting with the perception of sonic information to reach this position, informed by our previous work with scientists using sonar to identify different life forms in the Southern Ocean and the very low frequencies of lightning-generated 'Whistlers' in the magnetosphere. We considered that most of the public offered this sonified interpretation of breath analysis would be less familiar with understanding data by sound alone, and therefore more in need of cues and assistive techniques to bring their audio understanding to parity.


Stage 1

Our initial experiments involved establishing a family of sounds for the background VOCs. Although the atomic masses of these VOCs are spread across the spectrum of a mass scan, sonically it was more appropriate to reinforce the exaggerated distance from the foreground by clustering the pitches together, microtones apart. The idea behind the microtones was to pull the different VOCs together within the confines of a short pitch range, knowing that the acoustic effect would be both dissonant and accumulative: the closeness of the frequencies, combined with the number of sounds, would make individual sounds hard to hear and create the effect of a larger sonic mass - a homogeneous background sound world.
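The cluster idea can be sketched numerically. The base pitch, voice count and 20-cent spacing below are illustrative assumptions, not the values used in the piece:

```python
import math

def microtone_cluster(base_hz=220.0, n_voices=12, step_cents=20.0):
    """Frequencies spaced step_cents apart (a semitone is 100 cents),
    packing many voices into a short pitch range."""
    return [base_hz * 2 ** (i * step_cents / 1200.0) for i in range(n_voices)]

cluster = microtone_cluster()
# twelve voices span only 11 * 20 = 220 cents - just over a whole tone
span_cents = 1200.0 * math.log2(cluster[-1] / cluster[0])
```

Packing a dozen voices inside roughly a whole tone is what produces the dissonant, accumulative mass described above: no single frequency stands out, and the ear hears one body of sound.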

We wanted our audience to register that a background sound was present, then quickly ignore it and concentrate on the foreground sounds. Initial experiments for the background used slightly pitched breath recordings to establish a direct relationship between the breath sample and the sonified analysis. Recordings of pure (un-pitched) breath exhalations resulted in open sounds made up of multiple frequencies that were perceptually undefinable and therefore unworkable as a tool for the microtone cluster idea. However, our recordings of slightly pitched breath exhalations worked well for this purpose and could effectively be moulded into a sound cluster.

However, as often occurs in early artistic development, the experience of listening to the final microtonal cluster provoked a different response from the one we had anticipated: it sounded unlike breath and more like the gentle humming of our favourite animal - bees...

Example 1 - Just bees



Stage 2

Whether perceived as breath or bees, these early experiments sounded 'natural' to us (as opposed to human-made mechanical sound) and triggered a creative response to further extend the degree of separation between background and foreground in three main ways.

1 - Natural vs mechanical (timbre)
We imagined a contrast between foreground and background defined by sonic texture, and researched and experimented with perceptions of texture, considering the thoughts, moods and feelings our audience might associate with natural and mechanical sounds. We wanted to utilise the snap subconscious decisions we all make daily on hearing sounds - decisions that determine whether we should pay attention to them and how we should act or feel in response. We were keen to incorporate sound textures that triggered memories and associations that would reinforce the separation. Additional natural sounds we considered for the background included rolling thunder, used as a gentle alert (suggesting the potential danger of an approaching storm, although if perceived to be far enough in the distance it could still be ignored); mechanical sounds, such as an alarm clock, seemed very effective in the foreground, cutting through for the maximum attention of our consciousness.

2 - Looped vs non-repetitive (phrasing)
Knowing that the breath cluster would need to run on a continuous loop and would therefore have no defined beginning, middle and end, we introduced additional non-repetitive sounds to the background so that listeners were gently reminded of its presence and would not ignore it completely. By reminding the audience of the background (representing the c.293 VOCs in the mass scan not picked out for attention) we could encourage listeners to separate out and focus on the events taking place in the foreground. For example, as well as being used for their textural properties, recordings of thunder were also used to suggest the temporal dynamic one might really experience in a thunderstorm, with waves of thunder that approach and dissipate as the storm moves overhead.

3 - 3D wash vs specifically placed (spatialisation)
We also focussed on low-pitched sounds for the background, knowing from experience that they would provide a multi-directional spread when spatialised in 3D compared to the localisation of the higher-pitched foreground sounds.
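The non-repetitive phrasing of point 2 above can be sketched as a simple event scheduler; the gap lengths are illustrative assumptions, not the timings used in the piece:

```python
import random

def thunder_schedule(duration_s, min_gap_s=20.0, max_gap_s=90.0, seed=None):
    """Start times (seconds) for background thunder events: random gaps
    mean the pattern never audibly loops, so each event gently
    re-announces the background without becoming predictable."""
    rng = random.Random(seed)
    times = []
    t = rng.uniform(min_gap_s, max_gap_s)
    while t < duration_s:
        times.append(t)
        t += rng.uniform(min_gap_s, max_gap_s)
    return times
```

Because each gap is drawn independently, the background never settles into the defined beginning-middle-end that the looped breath cluster lacks by design.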

Example 2 - Bees with thunder (NB you may need headphones)



Stage 3

In composing a sound world for the foreground, we recorded a range of sounds from a prepared piano and also used extended techniques to create a range of metallic and machine-like sounds. In addition we recorded a range of found metallic sounds, many of which were digitally manipulated. As well as designing the sounds with the same areas of timbre, phrasing and spatialisation in mind, we added an important new criterion: individuality (we needed our audience to be able to acoustically distinguish between the sounds representing each of the VOCs). Each VOC was to have a distinctive sound associated with it, of equal acoustic weight to the other foreground sounds but with more weight and prominence than the background.
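A minimal sketch of the individuality criterion: each foreground VOC maps to its own distinctive sound at a shared foreground gain. The file names and gain figures here are hypothetical placeholders, not our actual recordings:

```python
# Hypothetical one-to-one mapping from VOC to a distinctive sound
# (file names are illustrative stand-ins for the prepared-piano and
# found-metal recordings described in the text).
FOREGROUND_SOUNDS = {
    "isoprene": "prepared_piano_strike.wav",
    "acetone": "bowed_cymbal.wav",
    "methanol": "metal_bowl_scrape.wav",
}
FOREGROUND_GAIN_DB = -6.0    # equal acoustic weight across the foreground
BACKGROUND_GAIN_DB = -30.0   # background kept well below

def assign_sound(voc):
    """Look up the distinctive sound for a VOC; VOCs without their own
    sound stay in the background cluster (returns None)."""
    return FOREGROUND_SOUNDS.get(voc)
```

Keeping the mapping one-to-one is what makes each foreground VOC acoustically distinguishable, while the shared gain keeps them at equal weight.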

Going back to sample data sets, especially for analysis of breath over time, we thought more about how individual sounds would feed in and out of the overall sonification depending on the data. The compound isoprene, for example, is fairly constantly present, whereas levels of other VOCs, such as acetone, can vary quite considerably. The following example was an exploration of these thoughts.
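One way to sketch this feeding in and out, assuming each VOC arrives as a concentration time series (the numbers below are invented for illustration):

```python
def level_to_gain(levels, floor=0.0, ceiling=None):
    """Normalise a VOC's concentration series to gains in 0..1: a steady
    compound holds a near-constant sound, a variable one fades in and out."""
    ceiling = max(levels) if ceiling is None else ceiling
    span = (ceiling - floor) or 1.0
    return [max(0.0, min(1.0, (v - floor) / span)) for v in levels]

# isoprene stays near full gain; acetone swings widely over the scan
isoprene_gain = level_to_gain([100.0, 103.0, 98.0, 101.0])
acetone_gain = level_to_gain([50.0, 420.0, 90.0, 610.0])
```

Driving each VOC's gain from its own level over time is what lets a constant compound sit steadily in the mix while a variable one audibly comes and goes.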

Example 3 - An exploration for foreground sonification



Example 4 - Background and foreground experiment (NB you may need headphones to hear the background..)

This test sonification brings together the strategy described above.


Further sonification developments were slowed after technical tests at the Science Museum Dana Centre revealed that the separation of sound in pitch and in space that we had hoped for was not achievable with their technical setup. Given the shift in our project's focus to the series of events taking place at this venue, we felt that we should concentrate less on sonification and more on the visualisation aspects of the project. However, we maintained our interest in sonification and used every opportunity to engage our audiences in conversations about the sonification of breath analysis data (mostly during the evening of the 'Tomorrow's Breath' event, as well as in the public drop-in breath analysis sessions).



Research

Below are links to some of the sonification research that we explored:



* "Perception of timbral analogies" by Stephen McAdams & Jean-Christophe Cunibile. Philosophical Transactions of the Royal Society of London, Series B, vol. 336 (1992). © Royal Society 1992
- webpage: http://articles.ircam.fr/textes/McAdams92a/

Interesting text re perception of timbre.




* "Human Perception of Sound" by Prof. Charles Hyde-Wright
- webpage: http://www.physics.odu.edu/hyde/Teaching/Fall04/Lectures/Phys332_Wk5.ppt

Interesting text re the physics behind people's perception of musical intervals and esp. microtones.




* "Psychophysiology and psychoacoustics of music: Perception of complex sound in normal subjects and psychiatric patients" by Stefanos A Iakovides, Vassiliki TH Iliadou, Vassiliki TH Bizeli, Stergios G Kaprinis, Konstantinos N Fountoulakis and George S Kaprinis
- webpage: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC400748/

Interesting re the ability of the PTR-MS to pick up markers for Schizophrenia esp. knowing that our audience would inevitably comprise a mix of healthy and unwell people.




* "Developing the Practice and Theory of Stream-based Sonification" by Stephen Barrass
- webpage: http://scan.net.au/scan/journal/display.php?journal_id=135

Really interesting approach to sonification.