“Decoding Bias”, created by Theresa Reiwer, is a multi-channel video and sound installation presented at Digithalia Festival, where spectators were invited to join Artificial Intelligences in their seated circle during a group therapy session in which the AIs discuss the biases that were built into their algorithms by humans. The sound design was done by Kenji Tanaka.
The setup itself is intriguing and interactive: the lighting, the speakers placed beneath each AI’s video screen, and the positioning of viewers as if they were already part of the group all use sound as a tool to enhance the “realness” of the scenario. The directivity of the spoken words lifts each AI out of its screen and into three-dimensional space.
Furthermore, sound plays an integral role in setting the mood. Before the performance starts, the hollowness of the space the visitors are about to enter is represented in sound, and the sound design grows more and more important as the story unravels and the AIs begin to question their encoded biases and the people responsible for them. Reverbs, distortion, and spoken words coming from all directions at once contribute heavily to the creepy atmosphere that emerges from the realisation that there are ulterior motives, backed by a lot of money, in human-made things, and that our perceptions, just like those of AI, are susceptible to the most prevalent voices in society. Conversely, a light-hearted party song releases the tension as the therapy session comes to an end. Sound is continuously present to help navigate this experience, building and releasing tension.
One detail struck me as fascinating: during the performance I was convinced that the voices were AI-generated as well. There was a lack of emotion, and breaths between sentences were not audible, at least to me. Upon reading up on the installation, I found out that the voices were performed by real actors. Not only must they have received excellent direction and done an amazing job, but the idea of how AI sounds must also have been considered in the audio post-production. Such a small but important detail inspired me to pay even more attention not only to how things sound, but to how they were made and came about, and to take this history into account when making choices about how to navigate analogue and digital sources.
At one specific point, in what is supposed to resemble a mostly empty office building, the footsteps worked well in terms of the space surrounding them, but in my opinion the chosen sounds just didn’t fit the type of shoe and the surface it hit. They were a good illustration of the sound designer’s never-ending struggle to find the right footstep for the occasion.
All in all, this was an immersive installation that left me pensive about its content, meaning that the sound and interaction worked together in a way that complemented the experience instead of distracting from it.