For my Sound in Space quadraphonic assignment, I mostly took the opportunity to experiment with techniques that I hope to use in my final. I am in another class, Choreographic Interventions, which combines dance and technology, particularly focusing on uses for the Kinect. Since my group for that class's final wants to build a soundscape that reacts to dancers' movements, I want to combine my final projects for the two classes. So this quad assignment is also a first iteration of the sounds for that project.

My JavaScript (Kinectron + p5 + Tone.js) sketch uses one Kinect and allows the dancer to manipulate three different sound samples. One is Morton Feldman's "Three Voices" (1982) (listen here); the other two are an electricity crackling sound effect and a joint cracking sound effect, both from https://freesound.org/. Bringing certain joints together turns the "Three Voices" or joint-cracking samples on or off. The electricity sound runs continuously, but gets louder or softer depending on how close the dancer's hands are to their face.
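
As a rough illustration of that interaction logic (not the project's actual code: the joint names, file paths, and distance thresholds below are assumptions, and it targets a recent version of Tone.js), the idea boils down to comparing joint distances on every body frame coming from Kinectron:

```javascript
// Illustrative sketch only, not the original code.
const cracking = new Tone.Player("samples/joint-crack.mp3").toDestination();
const electricity = new Tone.Player("samples/electricity.mp3");
electricity.loop = true;

// The electricity sample runs continuously through a gain node so its
// loudness can follow the hands' distance from the face.
const electricityLevel = new Tone.Gain(0).toDestination();
electricity.connect(electricityLevel);

// Straight-line distance between two Kinect joints
function jointDist(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

let crackingOn = false;
let wristsWereTogether = false;

// Called once per tracked body frame
function onTrackedBody(joints) {
  // Bringing the wrists together (closer than ~10 cm) toggles the cracking
  // sample; the latch gives one toggle per gesture rather than one per frame.
  const wristsTogether = jointDist(joints.leftWrist, joints.rightWrist) < 0.1;
  if (wristsTogether && !wristsWereTogether) {
    crackingOn = !crackingOn;
    crackingOn ? cracking.start() : cracking.stop();
  }
  wristsWereTogether = wristsTogether;

  // Electricity gets louder as either hand approaches the head:
  // roughly 1 m away -> silent, touching -> full volume.
  const handToHead = Math.min(
    jointDist(joints.leftHand, joints.head),
    jointDist(joints.rightHand, joints.head)
  );
  electricityLevel.gain.value = Math.max(0, 1 - handToHead);
}
```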

I want to use position tracking in the final to move sounds around the space depending on the performers' positions within the venue, so my focus in terms of using quadraphonic sound was to experiment with this location-based movement. The sketch follows the center of the dancer's body relative to the center of the space, and uses this position to pan all the sound samples around the room. So, when the dancer moves to the left of center, the sounds should pan left, and likewise for right, front, and back. I wasn't able to get the GridMultiChannelOutput to talk to all four channels of the audio interface I was using, but fortunately, before that feature was available, I had written some code that used trigonometry to map the position to a 4-channel MultiChannelPanner, so I used that method instead.
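
Here is a minimal sketch of that trigonometric mapping (an approximation of the approach, not the actual panner code; the speaker ordering and angles are assumptions): the body's x/z position gives an angle theta around the room center, and each of the four speakers gets a gain based on how far it sits from that angle.

```javascript
// Kinect coordinates: x is left/right, z is depth, so x/z describe the
// dancer's position on the floor plane relative to the room center.
function positionToTheta(x, z, centerX, centerZ) {
  return Math.atan2(z - centerZ, x - centerX); // angle around the center, in radians
}

// Assumed speaker angles: [front-left, front-right, rear-right, rear-left]
const SPEAKER_ANGLES = [3 * Math.PI / 4, Math.PI / 4, -Math.PI / 4, -3 * Math.PI / 4];

// Map theta onto one gain per speaker with an equal-power pan law:
// full gain at a speaker's angle, silent 90 degrees or more away.
function thetaToQuadGains(theta) {
  return SPEAKER_ANGLES.map((speakerAngle) => {
    let d = Math.abs(theta - speakerAngle) % (2 * Math.PI);
    if (d > Math.PI) d = 2 * Math.PI - d; // wrap the angular distance into [0, PI]
    const t = Math.max(0, 1 - d / (Math.PI / 2));
    return Math.sin(t * Math.PI / 2);
  });
}
```

The equal-power weighting keeps the combined loudness roughly constant as a sound sweeps between two adjacent speakers, so the movement reads as panning rather than a volume dip.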

Below is my video documentation. The sound quality isn't great, and the camera I was shooting on may only record mono audio, so I'm not sure how audible the panning is. But hopefully you can hear some changes in the sound as it pans around.

Demonstrating the setup.

You can see three of the four speakers in that video; the fourth is offscreen to the left. You can see it in the photo below (it's blending in with my black backpack in the corner), along with the Kinect and my Windows laptop, which streamed the Kinect data over the network, via Kinectron server, to the MacBook running the sketch.

The front two channels of my test setup, along with the Kinect.
A closer view of the Kinect, and Kinectron server showing the skeleton data (red dots, one per "joint") that it's streaming.

Finally, you can see a little bit of how the position-based panning is calculated in the video below. To help me calibrate my panning in the space, I added some quick p5 code that shows a red dot on a circle. The placement of the red dot on the circle tracks the panner position: it uses the same angle theta calculated from the x and z coordinates of my body in the Kinect data (on the Kinect, y is up-down and z is depth, so z maps to front-back in the space).

The panner position moving around the circle as I move. (Poor quality video, sorry, but you get the gist.)
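
That overlay amounts to only a few lines of p5. This is an approximation rather than the original code (the canvas size and circle radius are arbitrary):

```javascript
let theta = 0; // updated elsewhere from the Kinect x/z position

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);

  // Draw the "room" as a circle around the center of the canvas
  noFill();
  stroke(255);
  const r = 150;
  circle(width / 2, height / 2, r * 2);

  // Place the red dot on the circle at the current pan angle theta
  fill(255, 0, 0);
  noStroke();
  circle(width / 2 + r * Math.cos(theta), height / 2 - r * Math.sin(theta), 12);
}
```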

You can download the full source code of the sketch from GitHub.