For my stereo assignment for "Sound in Space," I continued to experiment with the transmission of sound through the body. I also wanted to add a performance component and to continue exploring what I call "radical presence": my attempt at using art to intensify awareness of the presence and being of other people, generally by creating unusual situations of intimacy. My idea in this case was a performance in which one channel transfers the sound of my heartbeat into the listener's body via a platform, while the other channel uses the bone-conduction transducer from my mono assignment to transfer a sonification of my brainwaves into the listener's head.

The first channel in the stereo setup was provided by a 4-ohm, 25-watt surface transducer attached to the bottom of a wooden platform. The listener (the piece is meant for one listener at a time) is instructed to lie down on their back on the platform with the green tape aligned with their sternum - this places the transducer directly under the center of their chest. In testing, I found this placement of the transducer to be the most immersive, making the sound feel resonant throughout the listener's torso as a whole rather than localized to a single spot along the back. (The metaphor of the heartbeat sound literally going into the listener's heart was unintentional, and I find it somewhat cheesy, but it's worth acknowledging.)

The platform with the surface transducer on the bottom.
The transducer attached to the platform.
The setup attached to an amplifier.
The setup playing sound. (It's especially good with bass-heavy sounds.)

The sound source for this transducer was a small microphone placed inside one earpiece of a stethoscope. The microphone was wired on a breadboard to an RCA jack and connected via an RCA cable to one input of an amplifier, which powered the transducer. This entirely analog setup amplifies the heartbeat and other sounds from the stethoscope and sends them into the platform, where they can be heard but, more intensely, felt throughout the listener's body. Although the platform also acts as a speaker, low-frequency sounds - and the heartbeat in particular - aren't very loud coming out of the platform when a listener's body is on it. They can, however, be felt very strongly by the listener.

The microphone, placed inside the rubber tip at the end of the stethoscope (and secured with hot glue).
Finally, the microphone is covered with some acoustic foam and breadboarded to 3.3V power and to an RCA jack.

The other channel of the stereo was a sonification of live EEG data from a Muse headband, which I sonified using Tone.js. I couldn't treat the data directly as sound because of the format in which I had access to it - namely, as a stream of OSC messages coming from the Muse's proprietary software. Each OSC message was one data point in the EEG reading, and although the software claimed a 220 Hz sample rate, in practice it sent ~190 messages per second. 190 Hz is enough to capture the frequencies that scientists generally care about in an EEG waveform: by the Nyquist criterion, a 190 Hz sample rate captures content up to 95 Hz, and EEG analysis generally focuses on frequencies of 100 Hz or below. However, since the data was coming in as individual messages rather than a fixed-rate stream, and most of all since the sample rate was so much lower than standard audio sample rates, I didn't know how to convert the messages directly into a waveform. Perhaps there is some resampling technique for converting up to an audio rate, but I don't know it.
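To make the sample-rate constraint concrete, here is a tiny sketch of the Nyquist arithmetic (the variable names are mine, not from the project's code):

```javascript
// Nyquist limit: a signal sampled at rate R can only represent
// frequencies up to R / 2. This is why ~190 samples/sec is plenty
// for EEG analysis (the bands of interest sit below ~100 Hz) but
// far too sparse to play back directly as audio.
const eegRate = 190;      // observed rate of the Muse's OSC messages (Hz)
const audioRate = 44100;  // a standard audio sample rate, for comparison

const nyquist = (rate) => rate / 2;

console.log(nyquist(eegRate));   // highest representable EEG frequency: 95 Hz
console.log(nyquist(audioRate)); // audio covers frequencies up to 22050 Hz
```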

I worked with the data by buffering 256 messages at a time and then applying a fast Fourier transform to get frequency-band information. I used 256 samples because I wanted at least 200 samples in order to capture frequencies up to 100 Hz, but the FFT library I was using (DSP.js) required that the sample size be a power of two. I then tried using additive synthesis to generate audio directly from the FFT. That is, I programmed 100 sine wave oscillators with frequencies from 2-100 Hz, adjusting their relative volumes according to the result of the FFT. (1-100 Hz is the frequency range used in EEG analysis, but I was consistently getting FFT values at 1 Hz that were a few orders of magnitude higher than at any other frequency, so I assumed that must be an artifact of the process and ignored the 1 Hz band.) The result was a white-noise field that sounded like EEG (e.g., similar to this video: https://www.youtube.com/watch?v=y1Nl3De_frM), but because I was adjusting the volumes only about once per second, the result had a repetitive quality that seemed to beat in time with each 256-sample adjustment. I didn't like this repetition: it hardly sounded like the data was changing at all.
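The windowed spectrum step can be illustrated with a naive DFT standing in for DSP.js's FFT. This is a minimal sketch under the assumptions above (256-sample windows at the observed ~190 Hz rate); the function and variable names are mine:

```javascript
// Naive DFT magnitude spectrum for one 256-sample EEG window.
// The real project used DSP.js's FFT, but a direct DFT shows the same math.
const FS = 190;  // observed sample rate of the OSC stream (Hz)
const N = 256;   // window size (a power of two, as DSP.js required)

function magnitudeSpectrum(samples) {
  const mags = [];
  for (let k = 0; k <= samples.length / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < samples.length; n++) {
      const angle = (2 * Math.PI * k * n) / samples.length;
      re += samples[n] * Math.cos(angle);
      im -= samples[n] * Math.sin(angle);
    }
    mags.push(Math.hypot(re, im));
  }
  return mags; // mags[k] is the energy near k * FS / N Hz
}

// Each bin is FS / N ≈ 0.74 Hz wide, so bins ~3..128 cover roughly 2 Hz up
// to the 95 Hz Nyquist limit - one bin (or small group of bins) per
// additive-synthesis oscillator, with the bin magnitude setting its volume.
```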

Instead, I settled on a result based on sonifying the five frequency bands that most EEG research divides readings into: Delta (0-4 Hz), Theta (4-7 Hz), Alpha (8-12 Hz), Beta (12-30 Hz), and Gamma (30-100 Hz). The different bands are associated with different sorts of brain activity, such as sleep, anxiety, or focus. From the FFT spectrum, I derived summary data about each band: the weighted average frequency within the band, and the total energy in the band relative to the other bands (normalized for the bands' varying widths). I used these measures to drive five sine wave oscillators. The frequency of each oscillator is controlled by the average frequency within its band, but because EEG data is so low-frequency, I mapped the frequencies up so that each band corresponds to one of the five octaves between A1 and A6. The relative total energy in the band controls the volume of its oscillator. My final code is up on GitHub. You can listen to a sample below:
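Here is a sketch of that band-summary and mapping step, assuming a spectrum from a 256-sample window at ~190 Hz. The band edges follow the text, except that I start Delta at 2 Hz (since the 1 Hz bin was discarded as an artifact) and cap Gamma at the 95 Hz Nyquist limit; all names are my own, not the project's actual code:

```javascript
// Summarize a magnitude spectrum into the five EEG bands and map each
// band to one sine oscillator's frequency (one octave per band, A1-A6)
// and gain (width-normalized relative energy).
const FS = 190, N = 256; // assumed sample rate and window size
const BANDS = [
  { name: 'delta', lo: 2,  hi: 4 },   // 1 Hz bin skipped as an artifact
  { name: 'theta', lo: 4,  hi: 7 },
  { name: 'alpha', lo: 8,  hi: 12 },
  { name: 'beta',  lo: 12, hi: 30 },
  { name: 'gamma', lo: 30, hi: 95 },  // capped at the 95 Hz Nyquist limit
];
const A = [55, 110, 220, 440, 880, 1760]; // A1..A6 in Hz

function summarizeBands(mags) {
  const raw = BANDS.map(({ lo, hi }) => {
    let energy = 0, weighted = 0;
    for (let k = 0; k < mags.length; k++) {
      const f = (k * FS) / N;
      if (f >= lo && f < hi) { energy += mags[k]; weighted += f * mags[k]; }
    }
    return {
      avgFreq: energy > 0 ? weighted / energy : lo, // weighted average frequency
      density: energy / (hi - lo),                  // normalize for band width
    };
  });
  const total = raw.reduce((s, b) => s + b.density, 0) || 1;
  return raw.map((b, i) => {
    const { lo, hi } = BANDS[i];
    const t = (b.avgFreq - lo) / (hi - lo); // position within the band, 0..1
    return {
      oscFreq: A[i] * Math.pow(2, t), // one octave per band: A1-A2 ... A5-A6
      gain: b.density / total,        // relative, width-normalized energy
    };
  });
}
```

In the actual piece, each `oscFreq`/`gain` pair would then be written to a Tone.js sine oscillator once per 256-sample window.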

It's not very pleasant sounding, which is okay with me as I like the idea of my head sounding noisy and anxious, but I wasn't too happy with the result overall. You can hear the pitches shift and get louder and softer as my brainwaves change, but it's still very monotonous, and the highest frequency dominates the entire time. Still, I was out of ideas and time at this point, so I went with it.

The EEG sonification channel was connected to the bone-conduction transducer that I had picked up for my mono assignment. After lying down on the platform, the listener was to place the transducer on their forehead. I also provided a bandana to tie the transducer snugly against the forehead, so that the listener doesn't have to hold it there.

Everyone who tried the setup seemed both to enjoy, and to be very surprised by, the experience. Having tried it myself, I can say it is a very unusual and immersive bodily experience. The heartbeat channel through the platform really carried the weight of the project, which makes sense: it is so much more intense, shaking the entire body the way it does. I think I succeeded in giving people an unusual and totally new perspective on how sound can travel through and be felt in their bodies, so I'm quite happy with the result. Below are some pictures and a short video from the performance in class.