“ontextC” – Technical Diary 5

What happened so far?

Over the end of the last semester and the summer, implementation became the main focus of the process. I managed to find decent placeholder modules for the EQ, pitch shifting and reverb units in the Max default resource examples (in the 'pitch and time'/'effects' folder, accessed via right click > object > open.amxd). With these, I did some testing using exports from the original Paulstretch software to make sure the results could work in the context of what I am trying to create.

Although I was initially headed towards just slightly modifying the phase vocoder that is available for Max, I realised that, for my understanding of both the algorithm and Max itself, it might be better to build and troubleshoot from scratch, so I would end up with a result I could fully explain and modify as needed. To do so, I used my Python analysis and the available GitHub repository to break down the most important steps of the algorithm (to recap in overview terms: Windowing Function > Fourier Transform > Spectral Processing > Inverse Fourier Transform > Interpolation and Smoothing), both conceptually and mathematically, so that I would be able to send the signal through the correct processing chain in Max for the output I am looking for. This also required me to revisit parts of my mathematical education in order to properly understand what I was working with.
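
As a reference for myself, here is a minimal Python sketch of that processing chain as I currently understand it. The parameter names and defaults are placeholders of mine, not the exact code from the repository:

import numpy as np

def paulstretch(audio, stretch=8.0, window_size=4096):
    window = np.hanning(window_size)           # windowing function
    hop_out = window_size // 2                 # 50% overlap on the output side
    hop_in = hop_out / stretch                 # the read head advances more slowly
    out = np.zeros(int(len(audio) * stretch) + window_size)
    read_pos, write_pos = 0.0, 0

    while read_pos + window_size < len(audio):
        frame = audio[int(read_pos):int(read_pos) + window_size] * window
        spectrum = np.fft.rfft(frame)          # Fourier transform
        # spectral processing: keep the magnitudes, randomise the phases
        phases = np.random.uniform(0, 2 * np.pi, len(spectrum))
        spectrum = np.abs(spectrum) * np.exp(1j * phases)
        frame = np.fft.irfft(spectrum) * window   # inverse FFT, then re-window
        out[write_pos:write_pos + window_size] += frame  # overlap-add smoothing
        read_pos += hop_in
        write_pos += hop_out

    return out / np.max(np.abs(out))           # crude normalisation

The slow input hop against the fixed output hop is what produces the stretch; the double windowing and overlap-add take care of the smoothing between frames.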

Ultimately I aimed for four manually adjustable parameters for now: Window size (to control spectral resolution), Overlap (how much successive windows overlap), Stretch factor (the most important one) and a Smoothing parameter that is meant to reduce artefacts in the output.
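
To keep the mapping straight while patching, I noted how these four controls relate to the hop sizes in a small Python sketch. The names, ranges and defaults are my placeholders:

from dataclasses import dataclass

@dataclass
class StretchParams:
    window_size: int = 4096   # spectral resolution: samples per FFT frame
    overlap: float = 0.5      # fraction by which successive windows overlap
    stretch: float = 8.0      # stretch factor: output duration / input duration
    smoothing: float = 0.0    # 0..1, placeholder for extra artefact smoothing

    @property
    def hop_out(self) -> int:
        # output hop size, fixed by window size and overlap
        return int(self.window_size * (1.0 - self.overlap))

    @property
    def hop_in(self) -> float:
        # input hop size: the read head advances this much per frame
        return self.hop_out / self.stretch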

For Future Reference

Another new consideration that came up during this process was that it might be useful to integrate a tuner of some sort, to be able to tune the edited audio as needed for the current project. However, this is not a priority right now.

Ongoing

I am currently also trying to plan the first listening sessions, to be able to test my prototype in the future. My supervisor suggested I look into webMUSHRA to set up listening test scenarios, and another idea was to set up a sonified “Find the Mistake” station at the exhibition, so people could playfully generate results for me to evaluate, in a less controlled context of course.

Results and Reflection

The stage of the project I am in right now is not the most rewarding, in that I don’t get any immediate results: I am setting up and testing the patch based on my notes and the processing chain I wrote down for the audio signal. Still, I know this step is essential to create a sounding prototype, and I am hopeful that it will pay off. Either way, I have learned a lot about digital signal processing during my research for this phase of the project, which is always useful.

Objectives for Next Time

  • Get sound through the new signal chain
  • Come up with test scenarios and mockups
  • If I get that far: Try to get reloadable presets set up

“ontextC” – Technical Diary 3

What happened so far?

A recent priority was the comparison of the different phase vocoders that are available in Max. With the help of the Cycling '74 resources, I tested whether the difference between the modules using polar vs. cartesian coordinates affected my sound sources in a way that was noticeable enough to make me choose one over the other. Ultimately, cartesian coordinates seemed like the better option for my project, also in terms of CPU usage. For windowing, the Hanning window is currently in use.
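
As a note to myself on the two representations (my own illustration, not taken from the Max patches): an FFT bin can be stored either as cartesian real/imaginary parts or as polar magnitude/phase, and the polar form costs a conversion per bin in each direction, which is where the CPU difference comes from:

import numpy as np

frame = np.random.randn(1024) * np.hanning(1024)   # a windowed input frame
spectrum = np.fft.rfft(frame)                      # cartesian: complex re + im per bin

# Polar form needs a sqrt and an atan2 per bin...
mag, phase = np.abs(spectrum), np.angle(spectrum)

# ...and converting back costs a cos/sin per bin:
reconstructed = mag * np.exp(1j * phase)
assert np.allclose(spectrum, reconstructed)

# In cartesian form, many spectral operations remain plain complex
# arithmetic on the re/im pairs, with no per-bin trigonometry:
scaled = spectrum * 0.5                            # e.g. a simple gain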

Furthermore, to better understand the processes the signal goes through within the plug-in, I asked my supervisor about the meaning of phase coherence in this context, and was able to connect the theory to the practical application bit by bit (a little terminology reference), which will help me a lot going forward.
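
To pin the concept down for myself: phase coherence means that each bin's synthesis phase keeps advancing at that bin's true frequency from frame to frame, instead of jumping arbitrarily. Below is the textbook per-bin phase propagation in Python; this is the standard formulation, not necessarily the exact Max implementation:

import numpy as np

def propagate_phase(prev_phase, cur_phase, synth_phase, hop_in, hop_out, n_fft):
    k = np.arange(len(cur_phase))
    expected = 2 * np.pi * hop_in * k / n_fft        # advance if bin k were exact
    deviation = cur_phase - prev_phase - expected
    deviation = np.mod(deviation + np.pi, 2 * np.pi) - np.pi  # wrap to [-pi, pi)
    true_freq = (expected + deviation) / hop_in      # radians per sample, per bin
    return synth_phase + hop_out * true_freq         # coherent synthesis phase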

Ongoing

The evaluation and development of the EQ, pitch shifting and reverb modules for my project is ongoing. Fortunately, there are a lot of libraries and resources, especially for filtering and spatial effects, so the main challenge here is to find what works best to achieve the sound results I am aiming for, while also being functional and relatively simple to integrate. By studying existing Max patches, even though they might not be 100% what I am looking for, I am learning more not just about the Max environment, but also about best practices and how I could translate certain organisational aspects (comments are so helpful for external people looking at a patch to know what is going on!) and connections into my own project patch. My main resources for this are free patches that I download from the Max for Live patch library page and explore.

Results and Reflection

While it is good to know that there is a phase vocoder that can help me realise my vision for this project, it is now time to start thinking about how to best integrate it, and to define which modifications need to be made for it to sound the way I want in the context of my project. To do so, I will draw comparisons between PaulXStretch and the Max phase vocoders, to determine limitations, potential areas of improvement and differences in sound quality at different stretch factors.
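
One way I could quantify those comparisons (my own idea, not an established protocol) would be an average log-spectral distance between two renders of the same source at the same stretch factor, for example:

import numpy as np
import soundfile as sf   # assuming soundfile is available for reading the renders

def log_spectral_distance(file_a, file_b, n_fft=4096, hop=1024):
    a, _ = sf.read(file_a)
    b, _ = sf.read(file_b)
    a = a.mean(axis=1) if a.ndim > 1 else a    # mix to mono for comparison
    b = b.mean(axis=1) if b.ndim > 1 else b
    n = min(len(a), len(b))
    window = np.hanning(n_fft)
    dists = []
    for start in range(0, n - n_fft, hop):
        spec_a = np.abs(np.fft.rfft(a[start:start + n_fft] * window)) + 1e-9
        spec_b = np.abs(np.fft.rfft(b[start:start + n_fft] * window)) + 1e-9
        dists.append(np.sqrt(np.mean((20 * np.log10(spec_a / spec_b)) ** 2)))
    return float(np.mean(dists))               # in dB; lower = more similar

A number like this would not replace listening, but it could help document where the renders diverge as the stretch factor grows.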

Objectives for Next Time

  • Prepare and document sound examples to compare the phase vocoder and PaulXStretch
  • Continue development of other modules