„Body and Violin Fusion“ – Latest Compositional Concept IX

The piece is based on both played and recorded materials. While it follows an overall concept, it is not a traditionally written score; rather, it depends heavily on, and is closely connected to, the processed sounds and the programming. The core idea of the piece revolves around the transition from the acoustic sound of the violin to processed and electronic sounds. This transition reflects my own musical journey from classical violinist to electroacoustic musician. Although the piece is not fully improvised, it still allows for a sense of freedom, enabling the performer to interact with the processed sounds, which vary with each performance. The structure of the piece is sectional, and with each step it shifts further into the electronic domain. The starting point for this is a loop of each buffer, in which sounds are heard that are not being played in real time by the performer.

Pieces like Suspensions by Atau Tanaka[1] and Weapon of Choice by Alexander Schubert[2], as well as Marije Baalman's book Composing Interactions[3], played a significant role in shaping the artistic direction of this setup. They helped me establish a connection between its technical and artistic aspects, and to blend improvisation with electronic manipulation in a meaningful way.

My intention was to unify the entire piece: in addition to the processed sound, the performer also plays live. This way, the piece does not become entirely electronic; instead, it creates a polyphonic sound in which different materials blend into each other. I also aimed to incorporate extended techniques on the violin, such as bowing on the body of the instrument to capture the texture of the wood's sound, among others. These techniques create variations with each performance.

Since there is no fixed score for the piece, the timing is inherently variable. It depends not only on the recorded materials but also on the length and nature of the interactions between the performer and the electronic sounds. The performer's engagement with the processed sounds can fluctuate, leading to different pacing and moments of intensity. That said, it is more or less clear that the most intense, chaotic part arrives at the moment the granular patches emerge.

Towards the end of the piece I considered two possible approaches, both of which could easily be implemented within the patch. The first scenario involved abruptly cutting off the sound while the piece remained in its chaotic phase, with the violin accompanying this sudden act. The second entailed first progressively increasing the intensity and then gradually fading out, giving this version a more gradual transition. These two variations could significantly alter the conceptual framework of the piece as well, either aligning with my intention to conclude with a sense of resolution or opening the door to further exploration and discovery. For now, I have chosen to conclude the piece by gradually reducing its dynamic intensity and stabilizing the sound. However, this decision is not necessarily final, as the compositional process remains open to further refinement.

„Body and Violin Fusion“ – Wekinator VIII

As a result, the data values became irregular and began with a range of negative values. The attempt to provide more history to the All Continuous type was unsuccessful: there were even more fluctuations in the outputs, and I could not really relate to these outputs in terms of using them for an interaction.

In conclusion, I primarily used real sensor data with some scaling and processing, as the changes were smoother. This approach was suitable because the parts of the composition that required continuous data control involved no complex gestures. I therefore used the simple x, y, and z values, which provided sufficient accuracy and responsiveness.
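
To make "scaling and processing" concrete, here is a minimal Python sketch of the kind of rescaling and smoothing one might apply to raw sensor values before using them as control data; the input range, smoothing factor, and object analogies are my assumptions, not the exact values from the patch.

```python
# A sketch of scaling/smoothing raw sensor values into a control range.
# The -1..1 input range and 0.2 smoothing factor are illustrative only.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale, roughly analogous to Max's [scale] object."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))            # clip to the expected range
    return out_lo + t * (out_hi - out_lo)

class Smoother:
    """One-pole lowpass, roughly analogous to Max's [slide] object."""
    def __init__(self, factor=0.2):
        self.factor = factor             # smaller = smoother but slower
        self.state = 0.0

    def step(self, x):
        self.state += self.factor * (x - self.state)
        return self.state

smooth_x = Smoother()
for raw_x in [0.02, 0.4, 0.9, 0.85]:     # fake accelerometer readings
    ctrl = scale(smooth_x.step(raw_x), -1.0, 1.0, 0.0, 1.0)
    print(round(ctrl, 3))
```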

Another approach was to remove the 3x rotation and total acceleration from the sensor data during the training phase to simplify the inputs for Wekinator. This was done to ensure that only the necessary data was provided, potentially making it easier for Wekinator to build a model and resulting in more efficient training. The plan proved successful, as it led to clearer outputs, and for now I have continued with this setup, using only the 3x acceleration data as the input for training Wekinator models.

The last effort to provide better data input to Wekinator was, firstly, to send constant data values by using the [metro] object to repeatedly trigger sensor readings at a fixed interval. If the system stops receiving data when the sensor is not moving, even for a few milliseconds, it might interpret this as a loss of connection or a gap in the data, potentially leading to misinterpretations. Secondly, I tried recording some examples in Wekinator without moving my hand (just keeping it still and then pressing the record button) while maintaining a position aligned with the initial movement. I also tried to record values that were not too low in terms of speed because, as seen in the data display, low values are mostly noise and not as useful as the higher acceleration values, which have a greater impact. In practice, there was a slight improvement in sensor functionality, though not significantly noticeable. I decided to stick with this configuration, as it theoretically ensures a more stable and reliable data flow. A balance is still needed, though: recording more aggressive and faster movements must also align with the tempo and the overall aesthetic concept of the performance.
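
The [metro] fix can be illustrated outside of Max as well. Below is a small Python sketch of the same idea: resend the most recent reading at a fixed rate so Wekinator never sees a gap. Port 6448 and the /wek/inputs address follow Wekinator's documented defaults; the 10 ms interval is an assumption.

```python
# Resend the last known sensor values at a fixed interval, like [metro]
# re-triggering the sensor readout, so the data stream has no gaps.
import time
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)   # Wekinator's default input port
latest = [0.0, 0.0, 0.0]                      # last known x, y, z acceleration

def on_sensor_update(x, y, z):
    """Called whenever the sensor actually reports new values."""
    latest[:] = [x, y, z]

while True:
    # Send the current values whether or not they changed.
    client.send_message("/wek/inputs", latest)
    time.sleep(0.01)                          # assumed 10 ms interval
```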

It is also worth mentioning the possibility of experimenting with the settings of Wekinator itself and changing the routing matrix, such as the input/output connection editor, which helped me in the early stages but not in the final one. There is also WekiInputHelper, a software component that sits between Max MSP and Wekinator and helps manage data from a feature extractor (the part that collects data). It offers features such as calculating the minimum, maximum, average, or standard deviation of data over time, collecting and storing data in a memory buffer, calculating changes in data (such as the difference from the previous value), performing custom math, applying filters, and controlling how often data is sent or sending data only when certain conditions are met.
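
For readers who have not used WekiInputHelper, the following Python sketch shows a rough equivalent of the windowed statistics and first differences it computes between the feature extractor and Wekinator; the window length of 20 samples is an assumption.

```python
# A rough stand-in for WekiInputHelper's feature processing: windowed
# min/max/average/standard deviation plus the change from the last value.
from collections import deque
from statistics import mean, pstdev

class InputHelper:
    def __init__(self, window=20):
        self.buf = deque(maxlen=window)   # memory buffer of recent values
        self.prev = None

    def step(self, x):
        self.buf.append(x)
        diff = 0.0 if self.prev is None else x - self.prev
        self.prev = x
        return {
            "min": min(self.buf),
            "max": max(self.buf),
            "avg": mean(self.buf),
            "std": pstdev(self.buf) if len(self.buf) > 1 else 0.0,
            "diff": diff,                 # change from the previous value
        }

helper = InputHelper()
for v in [0.1, 0.3, 0.2, 0.8]:
    print(helper.step(v))
```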

„Body and Violin Fusion“ – Wekinator VII

After establishing the connection between the two software programs, an effort was made to understand Wekinator’s three output types in relation to the composition concept and their application within the patches.

The All Classifiers output type represents distinct categories such as Position 1, Position 2 and Position 3. It is necessary to tell Wekinator how many categories to use. Wekinator then outputs numbers such as 1, 2 or 3, corresponding to categories 1, 2 and 3, and attempts to categorize each new input it is given.

All Continuous outputs generate numeric values within a defined range and come in two variants: real-valued, for example to control smooth changes such as sliders, and integer-valued, to adjust parameters such as a filter cutoff frequency with an integer output.

The third type in Wekinator is Dynamic Time Warping (DTW), which is used to recognize and compare more complex patterns over time. Wekinator sends different output messages for different output types. When it builds numeric or classification models, it computes a set of output values every time it sees a new input. When it builds a dynamic time warping model, however, it continually watches the input (regardless of its speed and duration) to see how closely the current shape of the input matches each of the example shapes, i.e., the trained patterns. This means that random movements will produce no output from Wekinator.

Figure 1. An overview of Wekinator output types in the software

In my initial attempts, I tried to record multiple examples for each motion and map the DTW to turn on/off different gates and trigger some selected parameters once during the piece. However, after numerous trials it became clear to me that the absolute value of the DTW is not crucial and cannot be effectively mapped to so many distinct parts. As a result, I decided to use an unlatching or momentary switch pedal for this purpose instead.

Later, I decided to utilize DTW for granular synthesis and the chorus section. I assigned different motion patterns to trigger various parts of these effects, ensuring that any misreadings or constant values would not negatively impact the piece. This approach prevents the possibility of silence, as multiple triggers occur in succession based on different movements. To streamline the process, I converted the float output values of the DTW, which typically range from around 3.0 to 13.0, into a single integer state. From the resulting three integer output data streams, I selected the highest or winner value, as it represents the most probable outcome. Additionally, I implemented a timeout mechanism using a gate with a 5-millisecond delay for the on/off cycle. This ensures that the selected winner motion remains active for a short duration, helping to stabilize the output and prevent rapid fluctuations.
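
A minimal Python sketch of this winner-take-all logic, following the description above: the three DTW float streams (roughly 3.0 to 13.0 each) are reduced to one integer state, and a short hold keeps the winner from flickering. The 5 ms hold follows the gate delay mentioned in the text; the rest of the structure is my assumption.

```python
# Pick the winning DTW stream (highest value = most probable gesture)
# and hold the selection briefly to stabilize the output.
import time

HOLD_SECONDS = 0.005                  # the 5 ms on/off gate delay
_last_change = 0.0
_current = None

def pick_winner(dtw_values):
    """Return the index (1..3) of the winning DTW stream, held briefly."""
    global _last_change, _current
    now = time.monotonic()
    winner = max(range(len(dtw_values)), key=lambda i: dtw_values[i]) + 1
    if _current is None or now - _last_change >= HOLD_SECONDS:
        if winner != _current:
            _current = winner         # integer state: 1, 2 or 3
            _last_change = now
    return _current

print(pick_winner([4.2, 12.7, 6.1]))  # -> 2
```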

As the distinct categories in the classification did not contribute effectively to the compositional process, I decided not to use them for this project. Instead, I focused on working with continuous outputs to manipulate various sections, such as reverb and pitch shift, during the piece. But since the outputs were not very smooth, I assumed that the lack of historical data in this output type might be the reason, so I considered that providing more past values could lead to more consistent results.

To address this, I mapped the sensors to the DTW and then used 10 data values (7 directly from the sensors and 3 from the DTW outputs) to train another Wekinator for continuous control. Additionally, different port and message names were required for the input and output so that Wekinator could distinguish them from the DTW data.

Figure 2. Max MSP Data Exchange and Configuration for DTW and All Continuous Data Types
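
A sketch of how the two Wekinator instances might be kept apart, as described above: each gets its own port and OSC address so the continuous model never receives the DTW traffic. Only port 6448 and /wek/inputs are Wekinator defaults; the second port and the /wek2/inputs address are hypothetical names for illustration.

```python
# Two Wekinator instances, separated by port and message name.
from pythonosc.udp_client import SimpleUDPClient

dtw_client = SimpleUDPClient("127.0.0.1", 6448)    # DTW instance (default port)
cont_client = SimpleUDPClient("127.0.0.1", 6449)   # continuous instance (assumed)

def send_inputs(sensor7, dtw3):
    # The 7 raw sensor values go to the DTW model...
    dtw_client.send_message("/wek/inputs", sensor7)
    # ...while the continuous model gets 10 values: the same 7 sensors
    # plus the 3 DTW outputs, under a different address and port.
    cont_client.send_message("/wek2/inputs", sensor7 + dtw3)

send_inputs([0.1] * 7, [4.2, 12.7, 6.1])
```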

„Body and Violin Fusion“ – Wekinator VI

After deciding on the types of interactions in the programming part, I realized the need to use machine learning techniques to map more complex gestures and analyze and compare them. I was looking for an external library that could help me with that so I could integrate all parts of the work into a single software. I came across ml.lib[7], developed by Ali Momeni and Jamie Bullock for Max MSP and Pure Data, which is primarily based on the Gesture Recognition Toolkit[8] by Nick Gillian.

Unfortunately, none of the objects from the package would load in Max MSP; I encountered an error indicating that the object bundle executable could not be loaded. I also discovered that the creators had discontinued support and debugging for the library. It appears, however, that the library still works on Windows in both Max MSP and Pure Data, and on macOS only in Pure Data.

Since I had developed all the patches and processing parts in Max MSP on macOS, I decided to work with Wekinator, an open-source machine learning application created by Rebecca Fiebrink, which sends and receives data via OSC. In the early stages, I tried to [pack] all the sensor data (3x rotation, 3x acceleration and 1x total acceleration) and send/receive it to/from Wekinator via the [udpsend] and [udpreceive] objects.

One important consideration, basic but necessary, is to use the same port number for the inputs; if everything runs on the same computer, the host is localhost and Wekinator's default listening port is 6448. Another key point is that the message name used to send inputs from Max should match the one in Wekinator, e.g., /wek/inputs. The same considerations apply when receiving outputs from Wekinator. Another important factor is that Wekinator needs to know the number of inputs and outputs to properly configure the machine learning model. At this stage, I set it to 7 inputs and chose to receive 3 outputs from Wekinator.

Figure 1. Real-Time Data Exchange and Configuration Between Max MSP and Wekinator
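
As a minimal stand-in for the patch's [udpsend]/[udpreceive] pair, this Python sketch shows the same OSC contract: 7 floats out to Wekinator, 3 floats back. Port 6448 and /wek/inputs are Wekinator's defaults; the output port 12000 and the /wek/outputs address follow Wekinator's usual defaults but should be checked against your own setup.

```python
# Send 7 sensor values to Wekinator and receive its 3 model outputs.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

client = SimpleUDPClient("127.0.0.1", 6448)

def send_sensors(rx, ry, rz, ax, ay, az, total):
    # Pack the 7 sensor values in one message, like [pack] -> [udpsend].
    client.send_message("/wek/inputs", [rx, ry, rz, ax, ay, az, total])

def on_outputs(address, *values):
    print("3 model outputs:", values)   # e.g. reverb, pitch, chorus params

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)

send_sensors(0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7)
server.serve_forever()                  # blocks, handling each output message
```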

„Body and Violin Fusion“ – Programming V

Additionally, there are other abstractions that I found necessary during the practice phase. For instance, I used the [jit.grab] object to digitize video from an external source, such as the laptop's front-facing camera, to observe my hand movements.

In the end, I used a feature found in the Extras menu of Max MSP to record and play back the Max output, as well as another buffer to record the acoustic sound of the violin, for further synchronization and mixing.

Some parts of the patch were placed in abstractions to make the patch clearer and easier to follow for the violinist, as well as to make it more accessible in different sections. This will require opening multiple windows on the screen based on the performer’s preference. Nevertheless, a presentation mode of the main patch can also be considered, offering a simplified, performance-oriented interface that allows the violinist to focus on essential controls and visual elements without unnecessary distractions.

It is also worth mentioning the pedal's function across the 8 interactions: for the first 5 parts it toggles each one on/off, meaning the pedal needs to be pressed twice per part, whereas the last 3 parts require only one press. A counter is included in this section to display the current interaction number, helping to prevent confusion while pressing the pedal.


Figure 1. An overview of pedal functions and interactions in Max MSP
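
The pedal logic can be sketched in a few lines of Python. This is only an illustration of the press-counting described above (toggle for parts 1-5, one-shot for parts 6-8); the exact state layout and counter semantics in the Max patch may differ.

```python
# One momentary pedal steps through 8 interactions: parts 1-5 toggle
# on/off (two presses each), parts 6-8 fire on a single press.

class PedalRouter:
    def __init__(self):
        self.interaction = 1          # counter shown to the performer
        self.active = False           # on/off state for toggled parts

    def press(self):
        if self.interaction <= 5:
            # Toggling parts: first press turns the part on,
            # second press turns it off and advances the counter.
            if not self.active:
                self.active = True
            else:
                self.active = False
                self.interaction += 1
        else:
            # One-shot parts 6-8: a single press triggers and advances.
            self.interaction += 1
        print(f"interaction {self.interaction}, active={self.active}")

pedal = PedalRouter()
for _ in range(12):                   # simulate 12 pedal presses
    pedal.press()
```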

„Body and Violin Fusion“ – Programming IV

The programming strategy was to begin by recording the violin to gather materials for further processing during the piece. I used four buffers (with applied Hanning windows to smooth the edges of the signal), recording into them sequentially for later looping. The buffers are triggered via a pedal, which activates each buffer one after another using a counter.
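
As an illustration of the edge-smoothing, here is a short numpy sketch that applies Hanning-style fades to a recorded buffer so the loop points do not click. Applying the window only to short edge regions is my assumption; windowing the entire buffer is the other common choice.

```python
# Fade a recorded buffer's edges with half-Hanning ramps to avoid clicks.
import numpy as np

def smooth_edges(buffer, fade_len=256):
    """Apply half-Hanning fades to the first/last fade_len samples."""
    window = np.hanning(2 * fade_len)
    out = buffer.copy()
    out[:fade_len] *= window[:fade_len]        # fade in
    out[-fade_len:] *= window[fade_len:]       # fade out
    return out

loop = smooth_edges(np.random.randn(44100))    # one second of fake audio
print(loop[0], loop[-1])                       # edges now start/end near 0
```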

After recording into the four buffers, the gate for pitch shifting opens for one or two of the buffers, as they contain more low-frequency content, making the pitch shift more noticeable. The pitch shift is controlled in real time using sensor data, specifically the Y-axis rotation parameter.

After exploring pitch shifting while playing the violin, the next gate gradually increases the reverb gain over 10 seconds, rising from -70 dB to -5 dB. The reverb parameters (size, decay time, high-frequency damping and diffusion) are controlled by real sensor data, including the Y-axis rotation. The core concept of the reverb patch is inspired by [yafr2], a plate reverb by Randy Jones in the style of Griesinger that is part of the Max MSP library.
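
The 10-second fade-in is easy to make concrete. This sketch ramps the gain linearly in dB from -70 to -5 and converts to linear amplitude, which is what a [line] object driving a dB-scaled gain would do; the step resolution here is arbitrary.

```python
# Ramp reverb gain from -70 dB to -5 dB over 10 seconds.

def db_to_linear(db):
    return 10 ** (db / 20.0)

def reverb_ramp(duration=10.0, start_db=-70.0, end_db=-5.0, steps=100):
    for i in range(steps + 1):
        t = i / steps
        db = start_db + t * (end_db - start_db)   # linear in dB
        yield t * duration, db_to_linear(db)      # (time, linear gain)

for t, gain in reverb_ramp(steps=5):              # coarse preview
    print(f"t={t:4.1f}s  gain={gain:.5f}")
```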

Next, I applied another gain adjustment using the same approach, over 20 seconds, to gradually introduce the chorus and granular sections. For this part, I primarily used DTW data from Wekinator to switch between different granular synthesis patches, while real sensor data controlled the chorus via the X-axis rotation parameter. The setup includes six granular synthesis patches, triggered at varying tempos. Three of these patches feature randomized start/stop (grain positions) and duration settings, creating diverse grain densities and sizes, with or without pitch shifting and reverse effects. The remaining three granular patches have their parameters controlled by the Y-axis rotation sensor. In this section, the resulting sound creates harmony across different frequency ranges.

Evaluation of a Master’s Thesis

Title: Performativity and Liveness: Approaches to Performance and Composition in Electroacoustic Music
Author: Leonie Patrizia Strecker
Institution: University of Music and Performing Arts, Graz
Date: September 2023
Field: Computer Music
Degree Name: Master of Arts (MA)

Artistic Quality:
The thesis tackles challenging concepts like performativity and spatial interaction, which Strecker tries to weave into her artistic work. However, the artistic interpretation feels limited to her personal approach, which could be enhanced by looking at other artists’ work for comparison. Examining how different genres approach these themes could offer valuable context and strengthen her own interpretations.

Degree of Innovation:
Strecker’s work has moments of originality, especially in how she applies theoretical ideas to her performances. Yet, the approach stops short of exploring newer methods or current technologies that could deepen the study of “liveness.” For instance, exploring real-time audience interaction or using AI-driven components might take the concept of liveness in a direction that resonates more with today’s audiences. The potential for further innovation is present but could be expanded.

Independence:
The thesis shows independence, especially in how Strecker applies theory to her compositions. However, a more questioning approach to her chosen ideas could have added depth. For example, considering viewpoints that challenge or add complexity to the idea of performativity in electroacoustic music would show a willingness to explore different angles, which could make her conclusions stronger.

Organization and Structure:
The thesis is clearly organized, but sometimes feels compartmentalized. The theoretical and practical sections could be more smoothly connected, creating a flow that lets theory and application reinforce each other. This reorganization might make the work feel more engaging for the reader.

Clarity and Communication:
Strecker’s writing is mostly clear, though some sections are quite dense with academic language, which could be challenging for readers unfamiliar with performance theory. Simplifying some of the complex ideas, or using more everyday examples, would make the thesis accessible to a wider audience. Terms like “corporeal liveness” and “mediatized presence” could especially benefit from being explained in simpler terms.

Scope of the Work:
The thesis stays focused, allowing for a deep dive into its topics, but this focus might be a bit too narrow. Relying mainly on her own compositions restricts the broader applicability of her findings. Looking at other artists’ work, or contrasting her pieces with notable electroacoustic compositions, might offer a more well-rounded perspective. Including recent advances in music technology, like machine learning in live performances, could also give the thesis a more modern edge.

Orthography, Diligence, and Accuracy:
The thesis is carefully proofed, though there are minor inconsistencies in digital citations. Additionally, bringing in newer scholarship, especially recent studies on live electronic performance, would make the arguments feel timely and better connected to current discussions in the field.

Literature:
Strecker has done well covering essential sources, referencing foundational theorists like Erika Fischer-Lichte and Philip Auslander. However, more secondary sources, particularly from fields like media studies or digital performance, could help add new angles to her analysis. Expanding the literature review in this way could show how her work fits into a broader conversation.

„Body and Violin Fusion“ – Audio Programming & Mapping Process III

The audio programming process took basically the entire semester, and it continues to be a focus as I refine it to better align with the compositional idea and ensure the patches are as practical as possible.

I started by mapping the sensors in Max MSP. Each sensor can be mapped to seven parameters using MIDI numbers; the seventh parameter is a measure of total acceleration obtained directly from the SOMI-1 receiver. It is calculated using a Pythagorean formula based on the acceleration data, excluding gravitational force to improve accuracy.
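
A sketch of how such a "total acceleration" value might be derived: the Euclidean (Pythagorean) magnitude of the three acceleration axes after removing the gravity component. The crude fixed 1 g subtraction on the z-axis is a simplification for illustration, not the device's actual algorithm.

```python
# Total acceleration as the magnitude of gravity-compensated acceleration.
import math

G = 1.0  # gravity expressed in g units

def total_acceleration(ax, ay, az):
    az_linear = az - G                 # crude gravity compensation (assumed)
    return math.sqrt(ax**2 + ay**2 + az_linear**2)

print(total_acceleration(0.1, 0.2, 1.05))   # ~0.23 g of actual motion
```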

Figure I. Movement parameters of SOMI-1 around the X, Y and Z axes

What I learned from the mapping process is that even though I tried to be specific with the movements, you generally cannot isolate just one of these parameters while moving. This is a key difference I need to consider when mapping movement sensors compared to other, static MIDI controllers. To effectively map the sensors while keeping the application of movements in mind, I divided the motions into the 7 parameters for each sensor:

  • Rotation X
  • Rotation Y
  • Rotation Z
  • Acceleration X
  • Acceleration Y
  • Acceleration Z
  • Total Acceleration

Figure II. An overview of sensor mapping in Max MSP

After completing the initial movement mapping, I began creating patches for the interaction part using the aforementioned list. This process is ongoing and will likely continue until the project’s completion. Meanwhile, a crucial aspect I am keen to focus on this summer is implementing patches to the sensors and testing their compatibility both independently and with the violin.

Throughout this learning process, I have become aware that the violin itself limits my movement mapping, and I am still trying to figure out whether I can wear the sensors on my hands or elsewhere; it is clear, though, that the majority of the mapping concerns the right hand, i.e., the bowing arm. However, there is also the possibility of mapping the sensors not only to the usual gestures that occur while playing the violin but also to some unconventional movements that trigger certain parameters, which would require more precise mapping. In general, there are a few possibilities for the mapping process that I need to consider and examine thoroughly.

There are several types of mapping strategies that can be employed, regardless of whether the relationship between the control input and the parameter output is linear or non-linear:

One-to-One Mapping: This is the simplest form of mapping, where each control or sensor input directly corresponds to a specific parameter or output.

Multi-Parameter Mapping: This approach allows a single sensor input to influence several aspects of the sound, controlling multiple parameters simultaneously or sequentially. There is also the possibility of switching the sensors' roles via a pedal, combining different tasks for the sensors.
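
A toy Python illustration of the two strategies; the parameter names and ranges are hypothetical, chosen only to make the shapes of the mappings concrete.

```python
# One-to-one vs. multi-parameter mapping, as plain functions.

def one_to_one(rotation_y):
    """One control -> one parameter (e.g. Y rotation -> pitch shift)."""
    return {"pitch_shift_semitones": rotation_y * 12.0}

def multi_parameter(total_accel):
    """One control -> several parameters at once, each scaled differently.
    A pedal could swap this function for another to recombine the
    sensors' tasks."""
    return {
        "reverb_size": total_accel,            # linear response
        "grain_density": total_accel ** 2,     # non-linear response
        "chorus_depth": 1.0 - total_accel,     # inverted response
    }

print(one_to_one(0.5))
print(multi_parameter(0.5))
```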

What I also have in mind is to avoid counter-intuitive mapping, which involves controlling parameters in unexpected ways, as it adds an element of unpredictability to the performance. I believe this is unnecessary for the project. Instead, my goal for the mapping part is to ensure that my movements have a clear and meaningful relationship with the parameters they control.

„Body and Violin Fusion“ – Types of Interactions II

After finalizing the compositional idea, I attempted to compile a list of all the signal processing types and audio effects that I intend to use in my piece. To practice with fixed elements, the aim was to control the parameters rather than randomize them, so that I could bring the project closer to what I envisioned and make it performable. In other words, I determined which aspects should be under my control and which should not.

Throughout the semester, I experimented with these different types of interactions to explore my sound possibilities and predict how they would affect the sounds of the violin.

  • Multiple Loops (evolving musical patterns)
    – On/Off through the monitor?
    – On/Off via sensors? This needs more precise and complicated patches, which should track unusual gestures rather than the way I normally play the violin.
  • Audio Signal Processing
    – Pitch shifting
    – Time stretching
    – Amplitude/dynamics
    – Reverb/delay
    – Timbre
  • Chorus: to create a thicker sound by duplicating the signal and slightly varying the pitch and timing
  • Sound Synthesis Variations
    – Additive synthesis
    – FM (frequency modulation) synthesis
    – Granular synthesis
  • Granular Synthesis (see the sketch after this list)
    – Grain size: shorter grains are noisier and more textural; longer grains are more faithful to the recorded sounds in the buffer
    – Grain density: higher density sounds thicker and more continuous; lower density gives a scattered texture with more noticeable individual grains
    – Grain shape (windowing function): the envelope applied to each grain to shape its amplitude over time; currently I am using the Hamming window
    – Grain position (start time): the starting point of each grain within the original audio sample
    – Grain playback direction: forward or backward!
    – Grain spatialization: just an idea that the grains move around the listeners from everywhere, like rain!
    – Grain sequencing: different orderings of the grains' playback for more chaotic textures
    – Randomizing the parameters: not my goal, but another possibility!
  • Spatialization: the goal is to begin with headphones that track movements, initially using a binaural mix. Additionally, I plan to explore spatialization utilizing various IEM plugins, including those designed for granular purposes.
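
As promised above, here is a minimal offline granular sketch in Python covering several of the listed parameters: grain size, density, Hamming-window shape, start position, playback direction and sequencing. All default values are illustrative, not the settings used in the piece.

```python
# Render a granular "cloud" from a source buffer into a numpy array.
import numpy as np

SR = 44100

def granulate(source, out_seconds=2.0, grain_ms=60, density=40,
              reverse_prob=0.25, seed=0):
    rng = np.random.default_rng(seed)
    grain_len = int(SR * grain_ms / 1000)
    window = np.hamming(grain_len)                       # grain shape
    out = np.zeros(int(SR * out_seconds))
    n_grains = int(density * out_seconds)                # grains per second
    for _ in range(n_grains):
        start = rng.integers(0, len(source) - grain_len) # grain position
        grain = source[start:start + grain_len] * window
        if rng.random() < reverse_prob:                  # playback direction
            grain = grain[::-1]
        pos = rng.integers(0, len(out) - grain_len)      # grain sequencing
        out[pos:pos + grain_len] += grain
    return out

cloud = granulate(np.random.randn(SR))    # 1 s of noise as a stand-in source
print(cloud.shape, float(np.max(np.abs(cloud))))
```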

„Body and Violin Fusion“ – Compositional Aspect I

As I started with audio programming, it felt like walking into an unknown world with many possibilities, and it wasn't clear to me which direction I should take. I came across a conference paper on New Interfaces for Musical Expression by Perry Cook[15], which helped clarify my initial steps. The paper advises: “Make a piece, not an instrument or controller.” So I decided to start from the idea of a specific musical composition, because I didn't want to create interesting research questions without a real product or future direction.
During the semester, I came up with two compositional ideas, and I aim to work on both of them since the interaction parts between them are not significantly different.

Piece No. I
The first concept comes from my own experience as a classical violinist who transitioned into electronic music. The overall idea involves contrasting aspects of my experience, demonstrating how every action reflects on other elements, and showcasing the complexity of identity. This piece has three sections:

Section I
1. Introduction
2. Then it goes to looping and layering
3. Atonality
4. Dissonant Intervals
5. Full of aggressive bowing
6. Using violin extended techniques
• For the transition from section I to section II, I will either put the violin aside or hold it without playing; instead, I will just control the recorded sound and manipulate it with the sensors.
• Then I will control the amplitude and perform a smooth decrescendo to prepare for the second section.

Section II
1. After some choreography, build the contrast on top of a distorted, fading-out loop
2. Play tonal patterns
3. With clear sentences
4. Subtle movements
5. Everything in this section is about contrast
6. Probably no loops at all!

Section III
1. Back to the chaos but this time overlapping and layering consonant intervals and melodies
2. End the piece immediately!
So the concept for this piece is mostly based on multiple loops, evolving patterns, and layering them to create some sort of polyphony. The goal is to alter these layers through various types of interaction to showcase the role of the sensors. I will also build a contrast between the manipulated sounds and the raw sound of the violin, primarily in section two, and then take ideas from each of the two sections to build the final, concluding section.

Piece No. II
The second piece evolves from the first one, but with a difference: the violin is played only at the beginning, and the focus then shifts to the transition from acoustic to electronic sounds.
1. Play the violin melodically, record it into the buffer, and create loops
2. Put the violin aside
3. Wear the sensors
4. Start with different types of interactions step by step
5. The end consists of many synth loops blending into each other