In a week’s time, In The Dark will be hosting a special listening event at the Wellcome Collection, as part of the larger Voice event. We will be curating an evening of listening that taps into our complex relationship with the voice, featuring a rich chorus of vocalisations, speech and other oral oddities. The listening event will run for approximately 20 minutes and will be repeated throughout the night (timings below) – I’ve just finished mixing the playlist and we’ve managed to squeeze in an interesting range of material, from strong narrative pieces to the more avant-garde.
In addition to our own event, there’s a load of other great stuff going on under the same roof, including talks exploring the science of speech, live vocal demonstrations from yodellers and sports commentators, talking parrots and technology that will remix your voice in real time. It’s all FREE as well, so if you’re in London next Friday you may as well drop by and have a look / listen for yourself.
An audio feature I produced over the summer for Pod Academy, exploring the development of the vOICe technology and its impact on blind users. The vOICe is a computer program developed by Dutch engineer Dr Peter Meijer which essentially converts images into sound. Through training and experience, blind users can learn to interpret these sounds as a sort of ‘synthetic vision’. The piece explores the technology from the perspective of blind user Pat Fletcher, and uncovers some of the science and technology behind its use with its creator Dr Peter Meijer and cognitive psychologist Dr Michael Proulx (University of Bath).
It was my thought that technology and the computer would be my way out of blindness.
Essentially, the software takes spatial information captured by a camera and converts this into a coded soundscape. Users can then learn how to decode this auditory signal into a visual one thanks to a process known as ‘sensory substitution’, where information from one sense is fed to the brain via another. Fundamentally, the vOICe re-routes information usually obtained by the eyes and delivers it through another sense organ: the ears.
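To make the idea of a ‘coded soundscape’ a little more concrete, here is a minimal sketch of the general mapping the vOICe uses – scanning an image left to right over time, with a pixel’s height mapped to pitch and its brightness to loudness. This is an illustrative toy, not Dr Meijer’s actual implementation; the function name, parameters and frequency range are all assumptions for the example.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=500.0, f_max=5000.0):
    """Toy image-to-sound encoder in the spirit of the vOICe.

    `image` is a 2-D grayscale array (rows x cols, values 0..1).
    Columns are scanned left to right, each becoming a short time
    slice of audio; each row drives a sine oscillator whose frequency
    rises with height in the image and whose amplitude is the pixel's
    brightness.
    """
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    # Row 0 is the top of the image, so give it the highest frequency.
    freqs = np.linspace(f_max, f_min, rows)
    t = np.arange(samples_per_col) / sample_rate
    signal = np.zeros(cols * samples_per_col)
    for c in range(cols):
        slice_ = np.zeros(samples_per_col)
        for r in range(rows):
            if image[r, c] > 0:
                # Brighter pixels contribute louder tones.
                slice_ += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        signal[c * samples_per_col:(c + 1) * samples_per_col] = slice_
    # Normalise to keep the waveform within [-1, 1].
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal

# A diagonal bright line: the listener would hear a tone falling in
# pitch as the scan moves left to right.
soundscape = image_to_soundscape(np.eye(8), duration=0.5)
```

With training, a listener learns to run this decoding in reverse – hearing a falling sweep and picturing a diagonal edge – which is the essence of the sensory substitution described above.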
Although the neuroscience and psychology behind the technology are still largely unknown, it is thought that the visual cortex is eventually recruited to process the incoming auditory information and, through experience, is able to decode it as spatial / visual information. There’s a great article over at New Scientist that goes into greater depth about the neuroscience behind it – including a useful diagram depicting how the technology works.
The software is currently freely available and can be used with virtually any imaging device, from webcams to camera-mounted glasses – there’s even an Android version available for mobile devices! With the increasing prevalence of mobile computing, the vOICe technology is liberating users from their blindness, allowing them to step outside and experience the world through a completely new visual perspective.