Interactive Audio Drama – Progress so far

Work on my interactive audio drama project is progressing, but it is also hitting some problems at the moment.

Getting the Bluetooth beacons to work is a slow process. Some problems came up, for example that the beacons turn themselves off after a while without use. Also, the open-source app for receiving Bluetooth data that I had planned to use turned out to be unusable, as it is only available for Windows devices and not for Android or iPhone. Luckily, I can get some help with the coding, as this is not my area of expertise. In the best case, we will write our own app that is capable of receiving and processing the input from the beacons. It should also have a simple user interface for small interactions via the app, a little bit of gamification, so that the player can now and then choose between answers or enter some information, like a code word.

The draft of the story plot is ready. The story combines fiction with a bit of history. In short, it is set in the Doctor Who universe. The player travels back in time, meets Johannes Kepler (who lived in Graz from 1594 to 1600) and has to fix the timeline, because something is very wrong.

The intro to the story is already written in script form. I am getting help with the writing of the script, so I can focus more on the recording and the sound design.

I recorded the first part of the intro to test whether the script works as written and to come up with some sound design ideas.

The next steps are to find solutions for the beacon problems, record the next track as a test track, and create the map.

Interactive Audio Drama

How it should work:

The story is intended to be played and experienced interactively to a certain degree. There will be an app, and the story will be played using one's own phone and headphones. There should be about six locations (more or fewer if necessary) that are relevant to the story, such as the well, the marketplace, etc. These locations will be displayed on a map, and the player can navigate to them. Bluetooth beacons will be placed at these locations to trigger audio when the player is close enough. The experience will be located in the Stadtpark and on the Schlossberg to avoid streets and traffic.
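Since the app does not exist yet, the beacon-triggering logic can only be sketched. The following is a minimal illustration, assuming the app's BLE scan callback delivers a beacon ID together with an RSSI value (signal strength in dBm); the names and the threshold are invented for the sketch, not taken from an existing beacon library.

```python
# Minimal sketch of proximity-based triggering. Assumes the app's BLE scan
# callback delivers (beacon_id, rssi_dbm) pairs; all names here are
# illustrative, not from a real library.

RSSI_THRESHOLD_DBM = -65  # rough stand-in for "a few meters away"; needs field calibration

def should_trigger(beacon_id: str, rssi_dbm: int, already_played: set) -> bool:
    """Fire a location's audio once, when the player is close enough."""
    if beacon_id in already_played:
        return False  # each location's scene plays only once
    return rssi_dbm >= RSSI_THRESHOLD_DBM

# Example: the player walks up to the well beacon.
already_played = set()
if should_trigger("well", -60, already_played):
    already_played.add("well")  # a real app would start audio playback here
```

The threshold would have to be calibrated on site, since RSSI varies a lot with beacon placement and the player's phone.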

The app will have its own interface for maps. Using the phone’s GPS, one can navigate on the map. The whole story is intended to take place in Graz.

Based on clues from the story, the player will have options to decide whether to go to location A or location B next to hear the next part of the story or get new clues relevant to solving the task.

It should be possible for the audio to be triggered only when it fits the story’s progression. That is, if someone passes by a location too early, and it wouldn’t make sense to hear something yet, it won’t be triggered.
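This gating could be implemented by checking the beacon against the story's progress before playing anything. A minimal sketch, assuming the scenes have a known order (the step names below are invented):

```python
# Sketch of story-progression gating: a beacon only triggers if the story
# has already reached the step where that location's scene makes sense.
# The scene order is invented for illustration.

STORY_ORDER = ["intro", "well", "marketplace", "finale"]

def audio_allowed(location: str, current_step: int) -> bool:
    """Allow audio at `location` only once the story has reached its step."""
    return STORY_ORDER.index(location) <= current_step

# Walking past the marketplace before the well scene: nothing plays yet.
too_early = audio_allowed("marketplace", current_step=1)  # False
in_order = audio_allowed("well", current_step=1)          # True
```

A branching story would need something richer than a linear list, but the principle stays the same: the trigger check consults the story state, not just proximity.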

There could also be a time limit to hear something at a certain location. For example, the player needs information from Person X at the well and Person Y at the marketplace, but one of them is about to leave soon, and the other is badly injured (for example), so there is a time constraint, and the player must decide where to go. Based on the story so far, the player might know which person is more relevant.
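Such a time constraint could be modeled as an availability window per scene. This is only a sketch, with invented names and durations:

```python
import time

class TimedScene:
    """A scene that can only be triggered before its deadline passes."""

    def __init__(self, location: str, open_for_s: float):
        self.location = location
        self.deadline = time.monotonic() + open_for_s

    def available(self) -> bool:
        """True while the scene's window is still open."""
        return time.monotonic() < self.deadline

# Person X leaves the well in 5 minutes; the injured Person Y at the
# marketplace can only talk for 2 minutes. The player must choose.
scene_x = TimedScene("well", open_for_s=300)
scene_y = TimedScene("marketplace", open_for_s=120)
```

The windows would likely be started by story events (e.g. when the player learns the clue), not at app launch.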

At the end, if the player succeeds, the story is resolved and they travel back to their own time, having fixed the timeline either completely, just a little, or not at all.

Dromos/Autos – The Autistic Ontology as Performance

This presentation, researched and performed by Matthew Rogerson, took place during the IRCAM Forum Workshops in Paris in March 2024. Rogerson set out to show the possible 'insides' of an autistic brain through neurofeedback, visuals and sound. Using an EEG, he picked up electrical brain signals that were sent through Max, which triggered the sounds and visuals. The project included aspects of generative electronic music, psychoacoustics, audio-reactive visuals and performance. He effectively acted as the human interface, creating the performance through his brain waves.

The performance consisted of flashing images, strong flashes of light in a darkened room, distorted sounds, white noise and high-pitched passages; provocation was his intention. In the context of this performance, the demonstration reminded me of a brain swamped with too much information, unable to switch off. He told us that it was meant to represent sensory overload, which is common in autism. He wanted to convey a feeling, and a representation, of how an autistic person might experience the world. Bits of speech were included, but they were stripped of their semantic qualities to show how an autistic person might perceive speech.

The delayed reaction from brain signal to output corresponded to a feeling he described: going out of the house every day feels like entering a new country, which takes up far more processing power in the brain than things we are used to.

While performing, he tried to be as neutral and passive as possible, although he said a feedback loop was created: his reaction to the outcome, the flashing lights and provocative sounds, created more chaos in his mind, which in turn created a more chaotic outcome. So the cycle continued.

Making Electronic Music Inclusive: A Virtual Studio for Visually Impaired Composers

This presentation by Joseph Butch Rovan took place during the IRCAM Forum Workshops in Paris in March 2024.

For this project, Joseph Butch Rovan works together with a visually impaired composer. He addresses the issue that existing tools for electronic composition using interactive technologies such as Max/MSP present a barrier to visually impaired composers. While graphical patching software aims to make things easier for the user, it creates obstacles for blind people and makes the tools even more user-unfriendly for them. In this presentation, Rovan introduces an interface and a Max programming environment specifically crafted to be fully accessible to visually impaired composers.

While working with the composer, he realized that screen readers are of almost no help when it comes to complex audio programming software. Also, interfaces with symmetrically arranged buttons are not optimal for visually impaired people, as they have to count in order to know where their hand is. His interface is designed for blind users and is equipped with tactile and audio feedback.

It has different types of controls, such as buttons, switches and dials, which are easy to tell apart. The potentiometers are arranged in distinct groups to make them easier to locate. The interface provides audio feedback, which can be routed to a separate channel for personal monitoring or turned off entirely. Using the audio feedback, the user can navigate menus before working with the controls.

This project enables visually impaired people to work on electronic music composition with access to the tools that sighted people use on a regular basis.

The composer Rovan worked with successfully held a concert with this tool, using her own presets and voice.

This system not only helps blind artists but also opens new creative avenues for sighted composers.

KI – Kasperl

From and with Andreas Pfaffenberger

Performed by Andreas Pfaffenberger and a puppet as part of the Digithalia festival at the Schauspielhaus, this play took the Kasperl figure and turned it into an AI that is trained during the performance. The AI goes from a very basic state to a very advanced one in which it is able to 'take over the world', showing problems that could arise, and are already arising, with AI, and focusing on the question of whether artificial intelligence is really intelligent. Pfaffenberger took a fun approach, which made a potentially heavy topic enjoyable to watch, while still raising some questions concerning AI.

There was not a lot of sound incorporated into the play: apart from his speaking voice, some sound effects were made by the performer himself. For example, waking up and turning on the Kasperl sounded like a rising tone that then merged into the famous 'Krawuzi Kapuzi' from the Kasperl plays. Most of these 'self-made' sound effects occurred in interactions between the Kasperl and the performer.

The Kasperl's voice was also spoken by the performer. Apart from these, there were very few sound effects; one sounded like a toaster when the Kasperl finished his update.

The performer used three recordings, which were played back over his phone, and one recording of the Kasperl's voice at the end, which came from several loudspeakers and gave the impression that the Kasperl was everywhere.

In my opinion, the performance could have benefited from more sound to make it even more entertaining or interesting. On the other hand, the original Kasperl is very minimal, so this style fit the purpose.

Decoding Bias

By Theresa Reiwer

Decoding Bias was a performance by an AI self-help group. Eight virtual avatars were displayed on screens, each equipped with its own speaker, and positioned in a circle, as one would imagine a self-help group.

Each AI has developed a character and now has to deal with it; they try to free themselves from their discriminating algorithms, even though discrimination and exclusion are programmed into them.

Before the beginning, when entering the room, there was low lighting and a low-range drone sound, which set the mood and, in my opinion, already created a sense of tension.

The performance worked a lot with drone sounds and pads in the background, which, together with differently colored lights, contributed a lot to the overall mood. The pads changed over time; I noticed them getting higher pitched when something was happening, or when I felt more tension.

As the characters' voices came from the loudspeakers positioned at their screens, one immediately knew who was talking and from which direction the voice was coming.

I liked that the speakers in the circle were used not only for the individual voices but also for the sounds and music. Sounds sometimes moved around, mainly in a circle, which brought movement to the installation.

Sometimes a voice would shift from its position to all the speakers, which had a nice effect: these were AI characters, after all, and could be everywhere, not only in their avatars.

When all of them were talking simultaneously, I really felt as if I were in the middle of the situation.

They worked with an upbeat, well-known pop song, which instantly lifted the mood in the room after a darker, heavier scene, something I found interesting to notice.

Although the topic can be a bit heavy, this installation took an overall more positive approach.