I was invited to work with social interactive sound in the context of urban experiences. I thought it could be an interesting idea to create customized, highly intimate and personal sonic auras. And it worked beautifully. The procedure, which went into practice and resulted in a bidirectional exchange of personal details, was organized as follows:
– measuring the heart rate by touching the visitor’s wrist
– creating a sonic playback with a pulsing heartbeat sound (based on freesound.org samples)
– inspiring the visitor to make an in-situ voice recording sketching their social profile (based on open social attributes)
– transferring the resulting sonic micro-piece – the sonic aura – to a portable mp3 player plus an active mini speaker
– wrapping this sonic aura hardware around the visitor’s neck
– letting the visitor step back into a new customized reality for the rest of the event and beyond
– letting things flow and strange spectacles happen
PS: I felt like Andy Warhol in his factory, only without assistants; I produced 15 auras non-stop between 8 and 11 pm. It is hard to talk about the details of what happened in the box and in my head. I felt the urgent need to update my SaveRocketScience lifestream.
PPS: this was a brand-sponsored artwork!
Thanks to Luis for empowerment, thanks to Laura for assistance, thanks to the 15!
There is a very interesting option to work a little bit on a derivative of the Urban Sync approach with Berlin-based artists next week … even a semi-public event … still top secret …
Matthias did a great job: Marsyas is used for the MFCC extraction, Rapidminer does a good job on the k-means clustering, and the results go as colored labels into a .kml file, viewed in GoogleEarth. That’s it!
It is interesting to see that the clusters represent certain segments and places of the route without being disturbed by outliers. We have to look into it further … but we are a little bit excited.
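The pipeline above is easy to reproduce. Here is a minimal sketch in Python (standing in for the Marsyas/Rapidminer toolchain), assuming the MFCC vectors and the GPS track of the route are already available; both are synthetic stand-ins here, and the filename is illustrative:

```python
# Sketch of the described pipeline: k-means over MFCC feature vectors,
# then cluster labels written as colored placemarks into a .kml file
# that GoogleEarth can open. Features and GPS track are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
mfccs = rng.normal(size=(60, 13))  # 60 frames x 13 MFCC coefficients (synthetic)
# synthetic walking route near Berlin center: (lat, lon) pairs
track = np.cumsum(rng.normal(scale=1e-4, size=(60, 2)), axis=0) + [52.52, 13.40]

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(mfccs)

# KML colors are aabbggrr hex strings: red, green, blue, yellow
colors = ["ff0000ff", "ff00ff00", "ffff0000", "ff00ffff"]
placemarks = []
for (lat, lon), lab in zip(track, labels):
    placemarks.append(
        f"<Placemark><Style><IconStyle><color>{colors[lab]}</color>"
        f"</IconStyle></Style><Point><coordinates>{lon:.6f},{lat:.6f},0"
        f"</coordinates></Point></Placemark>"
    )
kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       + "\n".join(placemarks) + "\n</Document></kml>")
with open("clusters.kml", "w") as f:
    f.write(kml)
```

Opening clusters.kml in GoogleEarth then shows each frame of the route as a dot colored by its cluster, which is roughly what the colored-label .kml output described above looks like.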
I am happy: I found a very good student who will use Rapidminer to perform an unsupervised clustering of the audio streams. Hopefully we will find the perfect combination of features and settings. At the very least, the visualization within GoogleEarth should be nice to navigate. 10 weeks to go …
I am giving some interviews these days, and I force myself to publish more for everyday people. Why? It seems necessary to explain what we scientists do and why some of our results matter beyond pure technological achievements. The omnipresence and reshaping of media is something that squeezes my mind 24/7. I am running a more general blog about these topics for the Competence Center Computational Culture at DFKI. Unfortunately I do not post frequently, because the core science needs time, too. And even worse, I have to translate a lot of material back and forth between German and English. Acting locally is important and allows for face-2-face interaction. But since my main interests are a bit of a niche, I have to act on an international scale most of the time. Sounds pretty simple, but it gives me headaches from time to time.
Anyway, I am doing fine and looking forward to discussions with you guys … see you soon, here and there.
I am reading R. Murray Schafer’s SOUNDSCAPE. A classic, first published in 1977 and still an excellent read. In parallel I stumbled over an advertisement in one of those journals for uebergeek tekkies: the Sony noise-cancelling headphones with integrated AI promise a rare luxury these days. Silence. Artificial silence. At first glance a bit pricey, but for the sake of research I am really considering buying them soon.