It feels like many of us are doing a lot of listening at the moment, and according to Zoom “Audio is the most important aspect of your Zoom Room!”, so why does it all sound so bad? The compression algorithms employed by platforms such as Zoom and Microsoft Teams are geared towards producing highly standardised, compressed, and normative listening experiences. The functional rationale for the audio processing is to optimise speech intelligibility. Drawing on knowledge long used in hearing aids, telephones, and audio file compression formats such as MP3, the online platform default is to cut out background noise, filter sounds outside of the range of human speech perception, and remove any information that isn’t needed so that data can be transferred as efficiently as possible. When machines make choices about what is important in our experience, we need to ask what is being lost as well as what is being gained.
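The filtering principle described above can be sketched in a few lines of code. This is only a crude illustration, not how any particular platform actually works: real speech codecs are far more sophisticated, but the underlying logic is the same, in that energy outside a nominal speech band (here the classic telephone band of roughly 300–3400 Hz, an assumption for the sake of the example) is simply discarded.

```python
import numpy as np

def speech_band_filter(signal, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Crude FFT band-pass: keep only a nominal speech band.

    Everything below low_hz (rumble, bass notes) and above high_hz
    (hiss, harmonics, much of what makes music 'music') is zeroed out.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Demo: a 100 Hz hum (outside the band) mixed with a 1 kHz tone (inside it).
fs = 16000
t = np.arange(fs) / fs
hum = np.sin(2 * np.pi * 100 * t)    # low rumble -- removed by the filter
tone = np.sin(2 * np.pi * 1000 * t)  # within the speech band -- survives
filtered = speech_band_filter(hum + tone, fs)
```

After filtering, the hum is gone and only the speech-band tone remains: efficient for intelligibility, but a vivid demonstration of how much of the sonic world such processing throws away.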
A long-standing and common complaint amongst hearing aid users is that music often sounds bad through hearing aids, and that although it’s good to understand what another person is saying, it’s also very important to engage in non-semantic listening. When working with the deafblind charity Sense, audiologist Donna Corrigan explained that some of her patients found the machine listening used in hearing aids problematic. For example, devices which analyse a space and cut down on reverberation left some users feeling claustrophobic, and were problematic for deafblind users who might use the reverberation of a space to help navigate and to gain a sense of presence. Mara Mills and her colleague Jonathan Sterne are often keen to point out the intimate relationships between developments in audio technology and sensory loss. Platforms such as Zoom have accelerated accessibility tools into the classroom and into the mainstream consciousness, and whilst wider adoption of practices such as captioning and audio description is positive, when these are delegated to machines we have to ask what is at stake.
The focus on semantic listening in online learning over other types of listening, such as musical or reduced/everyday listening, also represents a highly controlled approach that dismisses the importance of sounds outside of human speech. This machinic compression and normalisation of our listening environment is not only hard work for our ears but also further promotes the dominance of language in education, marginalising much of the learning that occurs through non-semantic means. It also marginalises the many disciplines that use listening and the sonic outside of the constraints of language.
Sounds of the lecture theatre
It didn’t take long for many of us to start to miss the familiar, highly varied, and often noisy sonic environments that make up many of our lives. During the first few months of lockdown, numerous, and sometimes highly creative, projects were either initiated or surged in popularity. Examples include Yuri Suzuki’s Sound of the Earth: The Pandemic Chapter, @RadioLento, and Ambient Isolation.
My personal favourite was by a student at the Royal College of Art, ShangYun Wu, who made an audio recording of a university library as part of a web VR experience. The project allowed you to listen to somebody read a book to you from the library collection, but also allowed you to simply sit in the sonic ambience of the library whilst reading your own book at home. These manifestations highlight the importance of soundscape and environmental listening, but they also point to the continuation and acceleration of a problematic trend: the use of environmental sound as a tool of control and pacification. Sound of Colleagues allows you to listen to office sounds during lockdown as a way of helping you through the workday. Positively, the project tacitly acknowledges the importance of environmental sound in our lives and also reminds us that sound connects us with others through many sonic gestures, such as stirring coffee or rustling paper. However, it could be argued that the use of sound to aid productivity and pacify is highly problematic. Mack Hagood, in Hush, charts the history of the use of machines such as white-noise generators to combat tinnitus and of nature recordings to aid sleep and distract us from noise such as traffic. Importantly, he questions the links between technology and the neoliberal programme:
the very presence of a technological “solution” to this problem of conflicting freedoms reinforces the essential neoliberal belief that problems must be solved individually and within the market rather than addressed as systemic issues: individual consumption, rather than collective action, is the site of social agency.
For Hagood, the many instances where sound is used to mask or pacify disconnect us from communal listening and reduce complex, affective responses to sound to normative, prescriptive experiences. Some would see this as pacification, which in the case of corporate environments serves the neoliberal programme well, helping us maintain productivity through cybernetic interface in order to find meaning, purpose, and the illusion of freedom. This reductive thinking can be found in the niche but ubiquitous area of Audio Branding. Audio Branding authority Julian Treasure has long been peddling the idea that playing classical music in offices increases productivity, and he also promotes approaches such as playing the sound of the seaside in a seafood restaurant to enhance the experience of eating seafood. Whilst the acknowledgment of experience as multi-modal is important, the reduction of both music and nature to functional objects is oppressive.
As we accelerate towards technologically mediated learning environments, with algorithmically curated music and soundscape playlists at hand to enhance and distract, let’s not forget that listening can tell us a lot and that we all listen differently. Listening to online learning, I hope, reminds us of the importance of creating non-prescriptive, open systems whereby the listener is free to find meaning, use, and affordance. The job of students and teachers, then, is to test the affordances of these systems, not just to be complicit in their hegemonies of control. The continued commodification of our learning spaces through an attempt at an optimised listening and learning experience reduces experience, and our complex relationships with each other and the natural world, to their lowest common denominators, and the use of environmental sounds continues the colonisation of the non-human world by appropriating its sounds.
Dr Matt Lewis is a musician and sound artist based in the UK, and co-founder of Call & Response, one of the UK’s only dedicated independent sound art spaces. He has a PhD in Music from Goldsmiths and teaches at the Royal College of Art in London, where he leads the Sound Pathway on Information Experience Design and is a Tutor on MA Digital Direction.