Sound & Interaction in “Decoding Bias”

“Decoding Bias”, created by Theresa Reiwer, is a multi-channel video and sound installation presented at the Digithalia Festival, where spectators were invited to join Artificial Intelligences in their seated circle for a group therapy session in which the AIs discuss the biases built into their algorithms by humans. The sound design was done by Kenji Tanaka.

The lighting, the speakers placed under each AI’s video screen, and the seating of viewers as if they were already part of the group make for an intriguing, interactive setup that uses sound as a tool to further enhance the “realness” of the scenario. The directivity of the spoken words takes each AI out of its screen and into the three-dimensional space.

Furthermore, sound plays an integral role in setting the mood. Before the performance starts, the hollowness of the space the visitors are about to enter is represented in sound, and sound grows more and more important as the story unfolds and the AIs begin to question their encoded biases and the people responsible for them. Reverb, distortion, and spoken words coming from all directions at once shape the creepy atmosphere that emerges from the realisation that human-made things carry ulterior motives backed by a lot of money, and that our perceptions, just like those of the AIs, are susceptible to the most prevalent voices in society. Similarly, a light-hearted party song releases the tension as the therapy session comes to an end. Sound is continuously present to help navigate this experience, to create and release tension.

One detail struck me as particularly fascinating: during the performance I was convinced that the voices were AI-generated as well – there was a lack of emotion, and breaths between sentences were not audible, at least to me. Upon reading up on the installation, I found out that the voices were recorded by real actors. Not only must they have received incredibly good direction and done an amazing job, but the idea of how AI sounds must also have been considered in the audio post-production. It is a small but important detail that inspired me to pay even more attention not only to how things sound, but to how they were made and came about, and to take this history into account when making choices about how to navigate analogue and digital sources.

At one specific point, in what is supposed to resemble a mostly empty office building, the footsteps matched the surrounding space well, but in my opinion the chosen sounds just didn’t work for the type of shoe and the ground it hit. They were a good illustration of the sound designer’s never-ending struggle to find the right footstep for the occasion.

All in all, this was an immersive installation that left me pensive about its content, meaning that the sound and interaction worked together in an awesome way that complemented the experience instead of distracting from it.

Sound & Interaction in “Ein Flanellnachthemd”

“Ein Flanellnachthemd”, written by Leonora Carrington and staged in augmented reality in a collaboration between Augsburg State Theater and Ingolstadt State Theater, was presented at the Digithalia Festival within the confines of a single portable electronic device pointed towards a poster in which all the action takes place. The keywords for this play are surrealism, morbid interactions, and nightmare. It is evident from the beginning that the atmosphere is meant to be unsettling.

This is conveyed on one hand by the costumes and the interactions the actors have on the augmented-reality stage, but mostly through the sound: a deep, dark pad texture, close to being a constant in all of the house’s rooms, follows the spectator through the narrative. There are diegetic sounds as well – droplets in a bathroom, doors creaking upon being opened and closed, footsteps, and fire in a kitchen where a murder took place. Despite all this, what was done with sound seems minimal compared to the potential: a flood outside the window, a huge black swan made of paper, a tree growing inside a bedroom, a hyena crouching in the corner, and a crocodile in the bathtub. In my opinion, doing more here could have enhanced the experience of the surreal in these scenes, instead of simply brushing over them visually or perhaps not noticing them at all (e.g., if I am already moving my device to follow the dialogue and do not happen to tilt it up to see the leaves growing on the ceiling, will I even grasp the absurdity beyond a surface level?).

One factor that severely impacted the interplay between immersion and sound was the mix. Between dialogue, atmosphere, and switching to another poster, I found myself adjusting the volume multiple times to take note of everything that was going on. This is something I would want to focus on to create continuity and cohesion in an experience that is already bizarre enough without additional disturbances (especially if the presentation does not take place in one continuous stream). If there is already enough hassle with tilting the camera to witness what is happening in the story, having to adjust levels on top of that is just an additional distraction.

I really liked the music that accompanied the play. It was evolving and atmospheric, and although it was similar in most rooms, it fulfilled its purpose very well. The main lesson I learned from engaging with this piece critically is that if there are visual elements in augmented reality that add to the experience but are not directly referenced in the spoken dialogue, it can make sense to use sound to draw the spectator’s attention to the visual input coming from another direction – it doesn’t need to be super obvious, but a subtle hint helps to take in the whole scenery.