“ContextC” – Technical Diary 5

What Happened So Far?

Over the end of the last semester and the summer, implementation became the main focus of the process. I managed to find decent placeholder models for the EQ, pitch-shifting and reverb units in the Max default resource examples (in the 'pitch and time'/'effects' folder, accessed via right click > object > open.amxd). With these, I did some testing using exports from the original Paulstretch software to make sure the results could work in the context of what I am trying to create.

Although I initially intended to just slightly modify the phase vocoder that is available for Max, I realised that, for my understanding of both the algorithm and Max itself, it would be better to start and troubleshoot from scratch, so that I end up with a result I can fully explain and modify as needed. To do so, I used my Python analysis and the available GitHub repository to break down the most important steps of the algorithm (to recap in overview terms: windowing function > Fourier transform > spectral processing > inverse Fourier transform > interpolation and smoothing), both conceptually and mathematically, so that I would be able to send the signal through the correct processing chain in Max for the output I am looking for. This also required me to revisit my mathematical education a little in order to properly understand what I was working with.
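
For my own reference, the core of one analysis/resynthesis step can be sketched in Python roughly as follows. This is a minimal sketch assuming NumPy, mirroring the Paulstretch idea of keeping spectral magnitudes and randomizing phases; it is not the exact code from the repository, and the function name is my own:

```python
import numpy as np

def paulstretch_step(samples, window_size, stretch, pos):
    """One analysis/resynthesis step of a Paulstretch-style chain (sketch)."""
    # 1. Windowing: take a segment and apply a smooth window to avoid edge clicks
    window = np.hanning(window_size)
    segment = samples[int(pos):int(pos) + window_size]
    if len(segment) < window_size:                      # zero-pad the tail
        segment = np.pad(segment, (0, window_size - len(segment)))
    segment = segment * window

    # 2. Fourier transform
    spectrum = np.fft.rfft(segment)

    # 3. Spectral processing: keep the magnitudes, randomize the phases
    magnitudes = np.abs(spectrum)
    phases = np.random.uniform(0, 2 * np.pi, len(spectrum))
    spectrum = magnitudes * np.exp(1j * phases)

    # 4. Inverse Fourier transform, re-windowed for a smooth overlap-add
    out = np.fft.irfft(spectrum) * window

    # 5. The caller overlap-adds `out` into the output buffer and advances the
    #    read position by window_size / (2 * stretch) for the next step.
    return out, pos + window_size / (2 * stretch)
```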

Ultimately, I am aiming for four manually adjustable parameters for now: window size (to control spectral resolution), overlap (to control the overlap between successive windows), stretch factor (the most important one), and a smoothing parameter that is supposed to help produce a smoother output with fewer artefacts.
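
As a placeholder, these four controls could be bundled in a small structure like the one below; the names, types and default values are my own assumptions and are not yet fixed in the patch:

```python
from dataclasses import dataclass

@dataclass
class StretchParams:
    # Hypothetical defaults; the actual ranges in the Max patch may differ.
    window_size: int = 4096   # FFT window length in samples (spectral resolution)
    overlap: float = 0.5      # fraction of overlap between consecutive windows
    stretch: float = 8.0      # stretch factor; 1.0 leaves the duration unchanged
    smoothing: float = 0.5    # 0..1, how much smoothing to apply to the output
```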

For Future Reference

Another consideration that came up during this process is that it might be useful to integrate a tuner of some sort, to be able to tune the edited audio as needed for the current project. However, this is not a priority right now.
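
Should the tuner ever get built, the underlying estimate could be as simple as the following sketch (hypothetical helper names; a proper tuner would use a more robust pitch tracker than a single FFT peak):

```python
import numpy as np

def estimate_pitch_hz(samples, sample_rate):
    """Very rough fundamental-frequency estimate from the strongest FFT bin."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = np.argmax(spectrum[1:]) + 1   # skip the DC bin
    return freqs[peak]

def cents_off(target_hz, measured_hz):
    """Deviation of the measured pitch from a target pitch, in cents."""
    return 1200 * np.log2(measured_hz / target_hz)
```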

Ongoing

I am also currently planning first listening experiences, so that I can test my prototype in the future. My supervisor suggested I look into webMUSHRA to set up listening test scenarios, and another idea was to set up a sonified “Find the Mistake” station at the exhibition, so people could playfully generate results for me to evaluate, in a less controlled context of course.

Results and Reflection

The stage of the project I am in right now is not the most rewarding, in that I do not get any immediate results at the moment: I am setting up and testing the patch based on my notes and the processing chain I wrote down for the audio signal. I know, however, that this step is essential for creating a sounding prototype, and I am hopeful that it will pay off. Either way, I have learned a lot about digital signal processing during my research for this phase of the project, which is always useful.

Objectives for Next Time

  • Get sound through the new signal chain
  • Come up with test scenarios and mockups
  • If I get that far: Try to get reloadable presets set up

11 | Review: Feel What You Don’t Hear – A New Framework for Non-aural Music Experiences

Short Description

Aoi Uyama, Danny Hynds, Dingding Zheng, George Chernyshov, Tatsuya Saito, Kai Kunze, and Kouta Minamizawa. 2023. Feel What You Don’t Hear: A New Framework for Non-aural Music Experiences.

The research aims to enhance the music listening experience by providing audiences with a physical representation of the performer’s introspection, including elements like rests, groove, and rhythm. To do so, the authors analyzed the performer’s physiological responses, such as breathing and heartbeat, and presented them to the audience alongside the music performance.

Method:
Preliminary experiments were conducted to extract the performer’s introspection, and a device was designed to present it to the audience. Physiological data, including heartbeat and respiration, were recorded during a piano performance of Debussy’s “Clair de Lune.” The performer also provided subjective interpretations of the piece, which were translated into vibrations and force feedback.

Results:
User tests with seven participants yielded positive feedback, with participants reporting a more engaging experience when using the device. However, some users found the tactile sensations distracting, indicating room for improvement. The performer noted the importance of physical movements during performances and expressed the need for flexible communication methods to convey introspection sincerely. The authors’ future work aims to refine the system based on user feedback and to explore its application with a broader range of performers and audience members.

Review

The research presents a promising approach to enriching the music listening experience by allowing audiences to feel the performer’s introspection physically. The listener becomes more deeply engaged in the musical experience and can establish a new connection with the musician through additional feedback. Particularly for those who have never played an instrument, this offers the opportunity to understand what it feels like to make music. The approach aims to involve the body not only through hearing but also through touch, so the listener can grasp the sensations experienced during music-making.

This approach is certainly not perfect yet, as indicated by the conclusions. Some test subjects were distracted by the vibrations (which translate the physical sensations of the performers into tangible output), as they did not align with their own emotional experiences while listening. However, further approaches to transferring feedback to other output mediums suggest that there are ways to make the experience less intrusive. In particular, the idea of outsourcing the feedback to a medium not directly connected to the body – like the pillow-like object in the sketch – seems promising. This way, the listener can decide when they want to receive non-auditory feedback and when not.

Furthermore, I see great potential in providing deaf individuals with a new way to perceive music. There are already some devices that provide vibration feedback in response to music, and this experiment confirms that there are further possibilities to enhance immersion. If this idea becomes market-ready in the future, I could imagine it being used at concerts to show live audiences how playing different instruments elicits various physical reactions.