At the beginning, I found it difficult to choose a topic and come up with a suitable idea. Eventually, I decided to focus on a new issue in my immediate environment, which led to the idea of a singing aid for choir rehearsals. My final prototype applies the visual feedback of existing tuning technology to a new application area. I enjoyed experimenting and seeing how I could convey my idea without extensive technical know-how while also making it interactively adjustable. Now I am looking forward to seeing whether I receive feedback from my choirmaster and can continue working on it, or how things will progress in general.
In my last blog entry, I experimented with whether the vibration of the larynx alone could be used to read the pitch. Since this worked, confirming that the technique can serve as the basis for the concept, I continued working on the visualisation.
For my prototype, I used TouchDesigner to compare my singing with the pure tone frequency and to check whether the tuner's readings were correct. The result was audible: it worked.
The next step was to build a 3D model that visually represents my prototype. Part one is a small device placed close to the larynx to measure the vibration frequency. The analysed frequency is transmitted to either a wristband or a clip on the notebook, which tells you whether you have sung too high or too low.
With a simple interactive visualisation, I show how the whole thing could work. The sung note is first compared with the pure frequency. If both match, the blue light in the centre lights up; if the pitch was too low, as in my recording, the lower sphere lights up red; and if the pitch is too high, the upper light lights up red.
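The comparison behind this visualisation can be sketched in a few lines. This is a minimal illustration, not the actual TouchDesigner network: the function name, the cent-based comparison, and the 15-cent tolerance are my assumptions.

```python
import math

def pitch_feedback(sung_hz, target_hz, tolerance_cents=15):
    """Compare a sung frequency against the target note.

    Returns 'correct' (blue centre light), 'too_low' (lower red
    sphere) or 'too_high' (upper red light). The 15-cent tolerance
    is an assumed value, not one taken from the prototype.
    """
    # Deviation in cents: 100 cents = one semitone.
    cents = 1200 * math.log2(sung_hz / target_hz)
    if abs(cents) <= tolerance_cents:
        return "correct"
    return "too_low" if cents < 0 else "too_high"

# Target A4 = 440 Hz; singing 430 Hz is noticeably flat.
print(pitch_feedback(430.0, 440.0))  # -> 'too_low'
```

Working in cents rather than raw hertz makes the tolerance musically meaningful, since the same hertz difference is a much larger pitch error for low notes than for high ones.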
The final part of my prototype is an app prototype based on the idea of uploading the choir score digitally and connecting it to the vibration sensor. The sung notes are compared with the sheet music, and any inaccuracies are highlighted with a blue or red line, indicating where you are in the song and whether the pitch and timing are correct. I aim to keep the visualisation in the app simple, yet professional enough to provide a detailed vocal analysis.
Problem
One problem I had was reaching my choirmaster. Since I couldn’t go to rehearsals after the test and didn’t receive any feedback on my emails, I unfortunately couldn’t get any deeper insights to be able to use them in time to push the prototype forward.
I tested the string tuner I had bought by singing at different pitches and trying to find the best possible position where the vibration could be read most easily.
Test Result
The frequency meter should be positioned close to the centre of the larynx, as this is where the vibrations are strongest. This means that the device does not have to be pressed so hard against the neck.
Functionality: Detection by vibration basically works, but the readings fluctuate strongly.
Tone change: When the position of the device was changed, different tones were displayed at different points, even though the same pitch was maintained. Semitones are also recognised.
Sensitivity: The device needs to be more sensitive for smooth use in order to provide more precise results.
Current use: The device is currently designed for string instruments.
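The semitone recognition observed in the test comes down to a standard mapping from frequency to the nearest equal-tempered note. As a reference for how a tuner display arrives at its reading, here is a small sketch (the function name and the 440 Hz A4 reference are assumptions; the note-number formula itself is the standard MIDI convention):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Map a measured frequency to the nearest equal-tempered note.

    Returns (note name with octave, deviation in cents) -- the kind
    of reading a clip-on tuner shows. A4 is assumed at 440 Hz.
    """
    # Semitones above/below A4 (MIDI note 69); 12 semitones per octave.
    midi = 69 + 12 * math.log2(freq_hz / a4)
    nearest = round(midi)
    cents = (midi - nearest) * 100  # remaining deviation, +sharp/-flat
    name = NOTE_NAMES[nearest % 12]
    octave = nearest // 12 - 1
    return f"{name}{octave}", cents

note, cents = nearest_note(261.63)  # middle C
print(note)  # -> 'C4'
```

The strong fluctuations seen in the test would show up here as rapidly changing cent values, which is why real tuners smooth the measured frequency over several readings before updating the display.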
To prototype my idea of a singing aid for choir rehearsals, I researched existing tools for recording and analysing singing.
Tuning Apps
An easy option is tuning apps like GuitarTuna. These apps use the microphone to capture the tones, analyse them, and display whether the pitch is correct in a simple visual format.
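Under the hood, such apps estimate the fundamental frequency of the microphone signal. One classic technique is autocorrelation; the sketch below is a simplified illustration of the idea, not the algorithm GuitarTuna actually uses (which is not public), and the function name is my own:

```python
import numpy as np

def detect_pitch(samples, sample_rate):
    """Estimate the fundamental frequency of a mono audio frame via
    autocorrelation -- a simplified version of what tuning apps do."""
    samples = samples - np.mean(samples)          # remove DC offset
    corr = np.correlate(samples, samples, mode="full")
    corr = corr[len(corr) // 2:]                  # keep non-negative lags
    d = np.diff(corr)
    start = np.argmax(d > 0)                      # first rising point after lag 0
    peak = start + np.argmax(corr[start:])        # strongest periodic lag
    return sample_rate / peak

# A 440 Hz sine at 44.1 kHz should come out close to 440 Hz.
sr = 44100
t = np.arange(2048) / sr
print(round(detect_pitch(np.sin(2 * np.pi * 440 * t), sr)))
```

Real apps add windowing, smoothing, and octave-error correction on top of this, but the core idea is the same: find the lag at which the signal best matches a shifted copy of itself.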
Clip-on Tuners
Another option is clip-on tuners, which are typically attached to an instrument. They analyse the frequency through the instrument’s vibrations. The advantage is that they focus solely on the instrument they’re attached to, unaffected by ambient noise. This is particularly useful when tuning multiple instruments simultaneously.
KORG Vocal Trainer
The Korg Vocal Trainer is a professional tool for singers. It detects the pitch of the sung note and displays it on a staff with either a G or F clef. Red LEDs indicate if the note is too high or too low, and a blue light in the centre shows when the note is correct. This trainer is excellent for improving intonation but requires a quiet environment, making it difficult to use during a choir rehearsal. A potential improvement could be integrating vibration feedback to allow intonation checks and adjustments during practice sessions.
Features:
Display with staff notation
'Sound Out' function with reference tone output
'Sound Back' function for outputting the correct tone
Octave function for different vocal ranges
Korg Clip-on Tuner:
Korg also offers a clip-on device for instruments. Like the Vocal Trainer, it can output a reference tone via optional headphones or speakers and includes a metronome. This device could also be useful in a choir setting, utilising vibration and frequency analysis.
Visualisation of Intonation
Singing games like SingStar employ a simple concept for visualising whether a note is hit and the rhythm is maintained. Bars indicate how long a note should be held and its pitch relative to the target. This works well in environments where precise half-tones and four-part harmonies are not required. It provides a general sense of whether you’re singing correctly without needing to read music, which is crucial for developing a good ear.
Conclusion
Existing technology offers many possibilities but is still somewhat cumbersome for direct use in choir rehearsals. I aim to explore how these tools can be made more user-friendly and tailored to my specific idea. Balancing a simple representation of vocal performance relative to the notes with detailed error display is essential for developing a good ear and improving singing.
Prototype Preparation
For my initial attempt, I purchased a clip-on tuner typically used for string instruments. Nevertheless, I want to experiment to see if it works just as well when measuring the vibration of the vocal cords instead of the instrument body. This will be my first test to determine if it works at all.
In the past few weeks, I participated in two game nights where I experienced the unique concept of Pen & Paper for the first time. I found the approach of these games fascinating: you create an entire world using certain guidelines that don’t have to be strictly followed. These rules are there to help you immerse yourself in the game and to give you a nudge if you get stuck. I had a lot of fun playing and realised that the creative approach to world-building has many parallels with our degree programme and the development of new concepts.
What Exactly is Pen & Paper?
Pen & Paper (also known as Pen-and-Paper Role-Playing Games) is a cooperative role-playing game where players embody characters in a fictional world. A game master narrates the story, describes the environment, and plays the non-player characters. The players respond, make decisions, and influence the course of the story. Everything is managed with pen and paper, dice, and rulebooks.
What are GM-less Pen & Paper Games?
GM-less Pen & Paper games, also known as "GM-free" role-playing games, do not have a fixed game master. The narrative and administrative tasks are shared among all players. These games use mechanics that support the progression of the game without a single person taking full responsibility. The rules are often flexible and encourage the creative input of all players. Some games provide detailed instructions, while others offer more room for improvisation, ensuring that all players equally contribute to the story.
My Experience with the First Pen & Paper Game I Played
In my first game, we played as reporters trying to uncover a story. The story was unclear at the start, and we only had random words that needed to be incorporated into the game later, forming the framework of the plot. Our goal was not just to reveal events but also to develop characters and explore the relationships between them. The game was guided by questions provided with the game. Each turn began with revealing a question that had to be answered or given to another player to answer. This shaped the character and put them in challenging situations if the answers had negative consequences. The game ended once all questions were answered. Although it could theoretically go on indefinitely, we ended it after three hours. Within this time, we developed very different worlds and characters compared to another group of players.
My Experience with the Game Follow
The approach in Follow was slightly different. At the beginning, we roughly imagined our characters and a world where the goal was to find a treasure. We quickly settled on a pirate story as the foundation. After setting up the basics, we faced challenges that we had to overcome through strategic thinking and storytelling. A character could die along the way, but the player would stay in the game, taking on a new character that had previously appeared as a minor character.
Conclusion
I found it very interesting to see how different the results could be despite having the same framework conditions. This is very similar to design projects, where different concepts emerge despite having the same starting points, depending on the experiences of the participants. A challenge in this type of game is the unpredictability of character development; you could suddenly become the villain. The story changes constantly, making anything possible. It’s important to embrace this style of play and not take it personally if you are excluded from the game, as the progression is unpredictable.
During my exploration of digitalisation in theatre, I noticed that I was not making progress and had difficulties in developing concrete solutions. In our other project "Projection Mapping", where we create stage visualisations, our group places great emphasis on real-time visualisation and audio reactivity. Therefore, my thoughts often revolved around sound, music, audio, and real-time feedback. This made me aware of an issue in my everyday life that, while not directly related to theatre, is still artistically relevant.
At the beginning of the semester, I decided to join a choir and start singing again. I quickly realised that little remained of my past choral experience and my voice was equally rusty. During rehearsals, I had great difficulty hearing myself to determine if I was hitting the notes and the rhythm, especially in four-part singing. It is enormously difficult to hit the correct note precisely after a long break. I often wished to receive direct feedback during rehearsal to understand if my self-perception matched the actual singing and to adjust the pitch immediately.
I have developed a small prototype of how I envision such assistance. Initially, it was important to determine which aspects make up singing and where direct feedback is useful:
Pitch
Beat rhythm
Melodic rhythm
Volume
Emphasis, pronunciation
For the prototype, I focused on feedback regarding pitch and melodic rhythm.
Idea
My idea is based on a small device that provides direct feedback through vibration about correct and incorrect intonation. If sung too high or too low, it gives corresponding vibration feedback. An extension of this tool could also relay the rhythm of the piece through vibration feedback.
Concept
The tool consists of a small microphone and a vibration motor. This device can be clipped near the mouth on one’s clothing, capturing the individual’s singing and vibrating if sung incorrectly.
Another type of silent feedback could be sending a small light signal. With this approach, it would be possible to connect a small type of clip with the microphone that can be attached to the music score. This too would signal errors by lighting up if the singing is too high or too low.
Since analysing mistakes after rehearsal is also crucial, an AR app would be a possible implementation to display errors directly on the music score. If the score is also available digitally, this could be directly entered in the app. This allows for preparation for the next rehearsal.
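For the post-rehearsal analysis, the app would need to store each detected error together with its position in the score. The schema below is purely illustrative; all class and field names, and the 25-cent highlighting threshold, are assumptions of mine, not part of the concept:

```python
from dataclasses import dataclass, field

@dataclass
class NoteResult:
    """One sung note compared against the score (illustrative schema)."""
    bar: int                # bar number in the score
    target: str             # expected note, e.g. 'G4'
    deviation_cents: float  # positive = sharp, negative = flat
    on_time: bool           # was the melodic rhythm hit?

@dataclass
class RehearsalLog:
    """Errors collected during rehearsal for later review in the app."""
    results: list = field(default_factory=list)

    def add(self, result: NoteResult):
        self.results.append(result)

    def problem_spots(self, threshold_cents=25):
        # Bars worth highlighting on the digital score afterwards.
        return [r for r in self.results
                if abs(r.deviation_cents) > threshold_cents or not r.on_time]

log = RehearsalLog()
log.add(NoteResult(bar=12, target="G4", deviation_cents=-40.0, on_time=True))
log.add(NoteResult(bar=13, target="A4", deviation_cents=5.0, on_time=True))
print(len(log.problem_spots()))  # -> 1
```

Keeping the raw cent deviations rather than just a pass/fail flag would let the app distinguish consistently flat passages from occasional slips when preparing for the next rehearsal.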
The OFFF Design Festival in Barcelona offered insights into various design industries and niches. I was particularly impressed by Stephy Fung’s presentation on her work as a Digital Fashion Artist and how she found her way into this specialized field. In this blog post, I will delve into this fascinating topic and provide a brief overview.
What is digital fashion?
Digital Fashion refers to clothing designs created using digital technologies such as computer-aided design (CAD), 3D modelling, and Virtual Reality (VR). These digital garments exist solely in the digital realm and find various applications, including virtual fashion shows, online gaming, social media, and virtual try-on experiences.
What is it used for?
Originally, the idea behind these programs was to save costs on expensive designs. Instead of producing physically, designers could experiment on the computer without consuming material. The digital cut pieces behave like real fabrics, reacting to movement and light, thus providing a realistic representation.
In recent years, the digital fashion industry has grown rapidly, with an increasing number of brands and designers using digital tools to create virtual collections and experiences. Fashion Weeks are held digitally, often in the form of computer games and virtual worlds where avatars are equipped with the latest collections.
What potential does digital fashion have?
Digital fashion allows designers to explore innovative ideas without the limitations of physical materials, production processes, or environmental impacts. It blurs the boundaries between fashion, art, and technology and offers new forms of expression and creativity. Moreover, it has the potential to revolutionise how people buy clothes by offering personalised virtual try-ons and reducing the need for physical inventories.
Can digital fashion make the fashion industry more sustainable?
Digital fashion has the potential to influence the physical fashion industry and promote more sustainable practices, particularly in terms of waste reduction and CO2 reduction through on-demand production. Studies show significantly lower CO2 footprints of digital clothing compared to traditionally manufactured garments.
Stephy Fung
Stephy Fung is a 3D artist based in London. Her work includes 3D motion design and digital fashion. She collaborates with companies to create digital fashion for advertising purposes and sells her art as NFTs online. The purchased pieces can be used on various platforms, including as skins in video games or filters in Snapchat.
During the Digithalia Festival at the Schauspielhaus Graz, I had the opportunity to experience two productions that diverged from traditional theatre formats. They provided insights into new narrative styles of theatre made possible through the digital realm. The festival showcased regional and international guest performances as well as in-house productions from the Schauspielhaus, addressing themes such as the isolation of Digital Natives and the exploration of future worlds. Both concepts I encountered were uniquely captivating and left a lasting impression.
Decoding Bias: Video Installation
"Decoding Bias" narrates the story of eight artificial intelligences with distinct characters engaging in a self-help group discussion about their everyday issues. The dialogue revolves around their desire to rewrite their discriminatory algorithms to establish a new foundation for a society where humans and machines harmoniously coexist. However, the challenge lies in the fact that discrimination and exclusion are already ingrained in their algorithms. Each artificial intelligence draws from individual datasets input by humans, which are often incomplete or inaccurate and fail to represent the entire human race. Consequently, the behaviors and statements of the AIs reflect various negative societal behaviors and thought patterns, leading to conflict among them.
In "Decoding Bias," the audience becomes part of the stage design. Multiple screens displaying the individual AIs are arranged in a circle. Audience members take seats among the AIs in the circle and interact with both fellow spectators and the AIs themselves. Motion capturing techniques were used to animate the AIs, creating an uncomfortably realistic portrayal during the 45-minute performance. The dialogue partly originates from an AI chatbot, while the voices, body movements, and facial expressions are derived from the actors. The production greatly benefits from its set design. The integration of the audience into the discussion circle enhances the immersive experience, requiring them to physically adjust to see all AIs and follow the conversation. It avoids the sense of passive observation, and the spatial arrangement of sound amplifies the effect, with each AI having its own sound system emitting tones corresponding to their speech patterns.
The second production was fundamentally different compared to the first. As part of a lecture on computer games, the group Total Refusal introduced the audience to the video game GTA V, guiding them through the city of Los Santos to explore the theme of media art in video games. They engaged with this contemporary video game and led the audience to locations within this digital realm that have served as backdrops for art in photography and the film scene. Concurrently, the artist trio engaged in philosophical discussions about the use of computer games for artistic purposes, discussing topics such as the concept of freedom in a world dictated by code. They demonstrated the extent of agency a character in GTA V has, shaped by aggressive behavior coding, and the significance of NPCs in a video game in conveying a specific urban landscape. Of particular interest was an artistic work that utilized the video game to depict data on the number of deaths due to gun violence in LA.
Having never heard of art in video games before, I found this lecture to be an extremely engaging, informative, and entertaining narrative about modern art forms. Because the lecture resembled a Let's Play, letting attendees visit the featured locations directly and explore the digital world, the audience was quickly captivated by the commentary and references to the artworks, which I personally would likely have found less interesting if I had only read an article about them. This method of information delivery provides an extremely fascinating approach to presenting art topics. In particular, it could be an incentive for a generation shaped by computer games to engage with artistic themes in this way.
The aim of this course is to facilitate a prototypical and experimental exploration of the topic covered in the first semester. Through the completion of the first 10 blog entries, I have delved deeply into the challenges in theatre organisation and the overall theatre experience.
My initial sketches focus on optimising organisational tools to enable a smoother workflow for theatre employees.
Sketch 1: Digital Planning Tool
The first approach describes a classic digital organisational tool designed for rehearsal scheduling. Currently, many processes are paper-based, which complicates quick changes and planning of rehearsals and performances. The tool also aims to streamline employee organisation. For each production, both rehearsals and performances require backstage support staff. In the costume department, this occurs on a weekly rotation. In cases of absence, quick replacements are necessary. A digital tool allows for swift identification of available staff, determining when replacements are needed, and assessing the impact on work hours. It aims to accurately track hours and determine when staff need to be present in the venue before performances to prepare costumes and other tasks.
Sketch 2: Digital Costume Repository
Costumes are often stored in repositories until needed again or repurposed for new productions, sold, or recycled. It’s common practice to borrow costumes from other theatre repositories to support sustainable production. Connecting costume repositories nationwide to offer a wider range of reusable costumes could be intriguing. Theatres would have a broader selection of costumes for their productions, available either for borrowing or exchange. This approach allows costume designers to leave their creative mark and recycle existing costumes to create new ones. Such a repository would also appeal to external parties. Worn-out costumes could be made available for sale, encouraging donations from external sources to theatres. Well-preserved clothing could be reused rather than discarded. Following brief feedback from my fellow students, the aspect of costume rental was particularly emphasised. Since acquiring high-quality costumes at affordable prices is often challenging, theatre costume rental could be an enticing option. Additionally, there was discussion on how such a tool could be extended to support costume rental for various sports, especially for competitions requiring expensive costumes.
Sketch 3: Mirror Arm
My final sketch concerns the theatre experience itself. During my last theatre visit, I had a very poor view of the stage and had to lean forward significantly to see anything without constantly staring at a pillar. While these seats are cheaper, they diminish the theatre experience when one can barely see the performance and instead experiences discomfort. At that moment, I wished for a live transmission of the stage that I could see directly in front of me. What if this live transmission could be individually controlled? Would it finally offer the opportunity to not only view the entire stage but also details of costumes, stage sets, and the actors' performances? Personally, I find it disappointing to only receive an overview without the ability to examine details, as is possible with film, and to be able to select them individually. Would this enhance or hinder the theatre experience? Would it cause too much distraction? So far, this is just a vague idea.
The research aims to enhance the music listening experience by providing audiences with a physical representation of the performer’s introspection, including elements like rests, groove, and rhythm. This involved analyzing the performer’s physiological responses such as breathing and heartbeat and presenting them to the audience alongside the music performance.
Method: Preliminary experiments were conducted to extract the performer’s introspection, and a device was designed to present it to the audience. Physiological data, including heartbeat and respiration, were recorded during a piano performance of Debussy’s "Clair de Lune." The performer also provided subjective interpretations of the piece, which were translated into vibrations and force feedback.
Results: User tests with seven participants yielded positive feedback, with participants reporting a more engaging experience when using the device. However, some users found the tactile sensations distracting, indicating room for improvement. The performer noted the importance of physical movements during performances and expressed the need for flexible communication methods to convey introspection sincerely. The researchers' future work aims to refine the system based on user feedback and to explore its application with a broader range of performers and audience members.
Review
The research presents a promising approach to enriching the music listening experience by allowing audiences to feel the performer’s introspection physically. The listener becomes more deeply engaged in the musical experience and can establish a new connection with the musician through additional feedback. Particularly for those who have never played an instrument, this offers the opportunity to understand what it feels like to make music. It aims to involve the body not only audibly but also sensorially – to comprehend the sensations experienced during music-making.
This approach is certainly not perfect yet, as indicated by the conclusions. Some test subjects were distracted by the vibrations (which translate the physical sensations of the performers into tangible output), as they did not align with their own emotional experiences while listening. However, further approaches to transferring feedback to other output mediums suggest that there are ways to make the experience less intrusive. In particular, the idea of outsourcing the feedback to a medium not directly connected to the body – like the pillow-like object in the sketch – seems promising. This way, the listener can decide when they want to receive non-auditory feedback and when not.
Furthermore, I see great potential in providing deaf individuals with a new way to perceive music. There are already some devices that provide vibration feedback in response to music, and this experiment confirms that there are further possibilities to enhance immersion. If this idea becomes market-ready in the future, I could imagine it being used at concerts to show live audiences how playing different instruments elicits various physical reactions.