A short, meditative film I directed and produced with Professor Nicholas Humphrey exploring the scientific significance of consciousness and the problems we face in understanding its existence.
After working together last year, Nick and I were keen to explore consciousness in a short-form piece – quite a challenge given the complexity of the subject matter.
Our intention was not to be too heavy handed with the facts and figures, but instead to present the viewer with some of the key questions and problems that scientists face in understanding consciousness from the perspectives of evolution and neuroscience.
One of the greatest challenges with this piece was always going to be constructing compelling images to go alongside the narration and pieces to camera. It was with this in mind that we chose the Botanic Gardens as the lush and colourful backdrop against which to explore these ideas.
The film was shot primarily on a Canon 6D over a couple of days, on location at Cambridge University Botanic Gardens and at the Royal Institution. I was really impressed with the footage coming out of the 6D (aside from a few moiré problems) and it received very little grading. I also paid a little extra attention to the audio, mastering it outside of FCPX in Ableton Live – just to give it a bit more polish!
Last year, as part of an AHRC funded project, I was commissioned to make a short experimental audio documentary on the subject of silence. I was given freedom as to how I explored this subject and so I set out to capture the thoughts of those who worked with sound and in silent spaces.
The result, unsurprisingly, was that silence meant lots of different things to different people, and so thematically it was very noisy! This relationship between noise and silence was one I was keen to explore through the production, and so the piece is filled with hiss, distortion and feedback in an attempt to echo the noisy subject matter. This was explored further through the interviews, but also with extracts of the poem ‘Describing Silence’, which are intercut throughout. The poem, written by James Wilkes, was a response to his time spent in total silence and explores some of the self-generated noise born out of silence.
The audio work was an artistic output for an Arts and Humanities Research Council funded project exploring the role of silence in academia and other professional fields. The project was run by the Science Communication Centre at Imperial College London and the piece was featured at one of their events.
The piece features interviews with Sophie Scott (cognitive neuroscientist), James Wilkes (poet and writer), Sara Mohr-Pietsch (BBC Radio 3 presenter), Cheryl Tipp (Natural Sounds Curator, British Library) and Vidyadaka (London Buddhist Centre).
The idea of distortion and noise influenced the production from the early stages and as work continued I really wanted to create an intense build up of noise that would level off and really help mark the silence experienced later on in the anechoic chamber.
The poem written by James Wilkes, ‘Describing Silence’, can be heard in full below:
The interview and reading from James were recorded in an anechoic chamber at UCL. The space itself is very strange to stand in; the best comparison I can think of is what happens to your hearing when you travel in a pressurised aeroplane. As a recording environment, though, it was actually a pretty boring space!
Although it did crop up in several interviews, I was keen to avoid referencing John Cage’s 4′33″ – there are some great pieces on this already (particularly here: http://www.thirdcoastfestival.org/library/1258-john-cage-and-the-question-of-genre) and it merits a much longer discussion than I could have accommodated.
The piece was recorded on a Zoom H4n and a Marantz PMD661 with AKG D230 dynamic microphone. It was edited and composed in Ableton Live.
An audio feature I produced over the summer for Pod Academy, exploring the development of the vOICe technology and its impact on blind users. The vOICe is a computer program developed by Dutch engineer Dr Peter Meijer which essentially converts images into sound. Through training and experience, blind users can learn to interpret these sounds as a sort of ‘synthetic vision’. The piece explores the technology from the perspective of blind user Pat Fletcher, and uncovers some of the science and technology behind it with its creator Dr Peter Meijer and cognitive psychologist Dr Michael Proulx (University of Bath).
It was my thought that technology and the computer would be my way out of blindness.
Essentially, the software takes spatial information captured by a camera and converts this into a coded soundscape. Users can then learn how to decode this auditory signal into a visual one thanks to a process known as ‘sensory substitution’, where information from one sense is fed to the brain via another. Fundamentally what the vOICe is doing is re-routing information usually obtained by the eyes and delivering it through another sense organ, the ears.
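To make the idea of a "coded soundscape" concrete, here is a minimal sketch of this kind of image-to-sound mapping. It is not the vOICe's actual implementation – the specific frequency range, scan duration and sample rate below are illustrative assumptions – but it follows the same principle: the image is scanned column by column from left to right, each row is assigned a sine tone (higher rows sound higher-pitched), and pixel brightness controls that tone's loudness.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=500.0, f_max=5000.0):
    """Convert a 2-D greyscale image (rows x cols, brightness 0-1)
    into a mono soundscape. Columns are played left to right across
    `duration` seconds; each row gets its own sine tone, with smaller
    row indices (the top of the image) mapped to higher frequencies,
    and brightness weighting each tone's amplitude."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate

    # Top of the image -> highest pitch, bottom -> lowest.
    freqs = np.linspace(f_max, f_min, rows)

    chunks = []
    for c in range(cols):
        brightness = image[:, c]                      # one value per row
        tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per row
        chunks.append(brightness @ tones)             # brightness-weighted mix
    signal = np.concatenate(chunks)

    # Normalise so the result stays within [-1, 1] for playback.
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# A bright diagonal line produces a tone that falls in pitch over time.
img = np.eye(8)
sound = image_to_soundscape(img, duration=0.5)
```

Listening to enough of these soundscapes alongside the images that produced them is, roughly speaking, the training process a vOICe user goes through.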
Although the neuroscience and psychology behind the technology is still largely unknown, it is thought that the visual cortex is eventually recruited to process the incoming auditory information and, through experience, is able to decode it as spatial/visual information. There’s a great article over at New Scientist that goes into greater depth about the neuroscience behind it – including a useful diagram depicting how the technology works.
The software is currently freely available and can be used with virtually any imaging device, from webcams to camera-mounted glasses – there’s even an Android version available for mobile devices! With the increasing prevalence of mobile computing, the vOICe technology is liberating users from their blindness, allowing them to step outside and experience the world through a completely new visual perspective.
Using a new imaging method, scientists from the University of Manchester have constructed a three-dimensional sequence of the brain as it loses consciousness. The small study used a new technique called ‘functional electrical impedance tomography of evoked response’ (fEITER), which is essentially a new way of measuring changes in the brain’s electrical conductivity. This is useful because changes in electrical conductivity are believed to reflect changes in the brain’s electrical activity, and by knowing where in the brain this activity is occurring, we can better understand how the brain operates under different conditions – in this case, unconsciousness. What’s great about this new method is that it has an extremely fast response, performing its imaging process 100 times a second, allowing the team to monitor the brain’s activity in real time!
The team used fEITER to scan the brains of 20 healthy volunteers as they were administered an anaesthetic and imaged changes in the brain’s electrical conductivity as it moved from a conscious to an unconscious state. The team found that a loss in consciousness corresponded with changes in electrical activity deep within the brain. The findings support a theory proposed by Professor Susan Greenfield which suggests that consciousness is formed from the unhindered communication between groups of brain cells called ‘neural assemblies’. The findings appear to show that when someone is anaesthetised, these small neural assemblies either work less well together or inhibit communication with other neural assemblies.
The use of fEITER is a great advance and a first for neuroimaging, allowing scientists to witness the brain’s transition into unconsciousness in real time. However, there is still a lot of work to be done to understand exactly how and why the brain behaves differently in an unconscious state. The fEITER device will also have a significant impact in many areas of medical imaging, and will be particularly useful in helping us understand anaesthesia, sedation and unconsciousness. Perhaps the most useful application of this device will be in diagnosing neuronal changes which occur in head injury, stroke and dementia patients.