14. International Design Week: Connecting Packaging to Digital

During the International Design Week workshop, titled "But Seriously… Following the Call of Beauty… or Celebrating the CounterPARTY?", we began by exploring the possibilities within graphic design.

As a group of students, we identified various daily life problems that we, as designers, could address. Each of us presented a specific problem, and we decided to focus on the issue of recycling information. I brought in a leaflet from my student dorm that was overwhelming and full of information, highlighting a common issue in student dorms and public spaces. While local residents are typically educated about proper waste disposal, newcomers often find it challenging to learn and follow new guidelines.

We split into two groups to brainstorm solutions. One immediate idea was to design a better flyer to make the recycling process more understandable. However, we soon expanded our thinking beyond traditional graphic design.

We explored various innovative solutions, such as implementing a color-coding system for garbage bags or using colored stickers on packaging to simplify sorting. Ultimately, we decided to focus on integrating packaging with a digital scanning app. This app would connect physical materials with digital information, helping to resolve recycling issues more effectively.

The app would allow users to scan a code on the packaging and receive rewards, thus gamifying the recycling process. This approach not only makes recycling easier but also incentivizes participation, making the process more engaging and rewarding.
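To make the gamification idea concrete, here is a minimal Python sketch of the scan-and-reward loop we imagined. The barcode values, point amounts, and bin mapping are all invented for illustration and were not part of the workshop outcome:

```python
# Hypothetical scan-and-reward logic: a scanned package code maps to a
# material and a waste bin, and each correct scan earns the user points.
# All codes, bins, and point values below are made up.

MATERIALS = {
    "4006381333931": ("plastic", "yellow bin"),
    "9002490100070": ("aluminium", "yellow bin"),
    "4104420033856": ("paper", "blue bin"),
}

def scan(code: str, points: dict, user: str) -> str:
    """Look up a scanned barcode, credit the user, and return sorting advice."""
    if code not in MATERIALS:
        return "Unknown package - please check the local recycling guide."
    material, bin_name = MATERIALS[code]
    points[user] = points.get(user, 0) + 10  # flat reward per scan
    return f"{material} -> {bin_name} (+10 points, total {points[user]})"

points: dict = {}
print(scan("4006381333931", points, "anna"))  # plastic -> yellow bin (+10 points, total 10)
print(scan("4104420033856", points, "anna"))  # paper -> blue bin (+10 points, total 20)
```

A real app would of course need a product database and anti-cheating measures; the sketch only shows how scanning, sorting advice, and rewards could connect.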

While we mainly brainstormed solutions during the workshop without having the time to prototype, the experience sparked important reflections. I began to consider how digital solutions, when implemented correctly, could address real-life problems thanks to unique qualities such as interactivity, gamification, and audio/video integration. These digital features have the potential to create engaging and effective tools that can significantly enhance our daily lives.

Vocal Vibrations: A Multisensory Experience of the Voice

I chose the article "Vocal Vibrations: A Multisensory Experience of the Voice" because I was doing my design & research on multi-sensory experiences in my first semester. The project's emphasis on enabling participants to uniquely and interactively explore their voices through sound and touch aligns closely with my interest in creating immersive sensory experiences that bridge the gap between technology and human expression.

Vocal Vibrations Installation

One of the most fascinating aspects of the article is the introduction of the ORB device, which converts vocal input into tactile vibrations, providing participants with a tangible and engaging way to connect with their voices. This innovative technology not only improves understanding of vocal production, but also provides a new platform for individuals to explore the nuances of vocal sounds in a hands-on way. For example, ORB allows users to feel the physical vibrations of their voice in their hands, creating a direct and intimate connection to the act of vocalization.
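The article focuses on the experience rather than the implementation, but the core idea of turning voice into touch can be sketched as a simple loudness-to-vibration mapping. The thresholds and the curve below are my own assumptions, not the actual ORB design:

```python
import math

def rms(frame):
    """Root-mean-square level of one audio frame (samples in -1..1)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def vibration_duty(frame, floor=0.02, ceiling=0.5):
    """Map a frame's loudness to a 0..1 motor duty cycle (invented curve):
    silence stays off, and loudness above `floor` ramps linearly to full."""
    level = rms(frame)
    if level <= floor:
        return 0.0
    return min(1.0, (level - floor) / (ceiling - floor))

# A 440 Hz vocal-like frame drives the motor; silence leaves it off.
frame = [0.4 * math.sin(2 * math.pi * 440 * n / 16000) for n in range(512)]
silence = [0.0] * 512
print(vibration_duty(frame))    # noticeably greater than 0
print(vibration_duty(silence))  # 0.0
```

In a real device this duty cycle would drive a haptic motor per audio frame, so the hand feels the envelope of the voice rather than the raw waveform.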

The ORB

Additionally, the positive response of participants, with 83% reporting feeling calm, exploratory, and deeply connected to the musical experience, highlights the installation's profound impact on personal well-being and self-expression. Comments such as "Reminds me of when I was a little girl, sitting on my mother's lap with my ear against her chest" and "I want one of these ORBs to help me relax at work or before bed" underscore the emotional resonance and transformative potential of the Vocal Vibrations project. By encouraging participants to pay attention to the range of vocal sounds and vibrations, the installation fosters rich musical experiences that center on the individual voice.

In conclusion, "Vocal Vibrations" not only demonstrates the power of technology to enhance personal well-being and creative expression, but also lays a strong foundation for future research and applications in the field of expressive interfaces and interactive installations. Combining art, technology and human emotion, the project offers a compelling glimpse into the possibilities of using sound and touch to deepen our connection with our voices and explore new dimensions of self-expression.

Catch Your Breath: A Musical Approach to Improving CT Imaging and Relaxation

"Catch Your Breath" is a novel interactive audiovisual biofeedback system developed at Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). Initially designed to assist patients during 4D computed tomography (CT) scans, this system uses musical cues to help regulate breathing, thereby reducing motion-induced image distortion.

Bridging Music with Medical Technology
This system originated from a collaboration with Stanford’s Department of Radiation Oncology. It aims to mitigate the imaging artifacts in lung cancer diagnostics caused by irregular breathing patterns. By tracking the patient’s breathing through a webcam and a reflective cube, the system synchronizes a musical accompaniment to the patient’s breath cycle. The music adjusts in real time, encouraging patients to breathe steadily to harmonize the melody with the accompaniment. This method has shown potential in increasing the accuracy of 4D-CT scans.

From Medical Equipment to Museum Installation
Adapting the technology for public interaction, the project evolved into a museum installation. Here, visitors wear a pendant embedded with an IR sensor that tracks their breathing. The system adjusts the tempo of a stored musical piece—selected from classical and Baroque repertoire—according to the wearer’s breathing pattern. This setup not only makes the installation accessible but also engaging, allowing users to control the music with their breath, promoting relaxation and slow, regular breathing patterns.

Technical Insights and Future Directions
The hardware of "Catch Your Breath" includes an inexpensive webcam modified with slide film as an ambient light filter, highlighting the simplicity and accessibility of the technology. The software, developed in Max/MSP with Jitter, calculates the breathing cycle and adjusts the music's tempo in real time through MIDI sequencers.
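The actual system is built in Max/MSP, but the core mapping from breath cycle to tempo can be sketched in a few lines of Python. The target period, base tempo, and peak times below are invented for illustration:

```python
# Sketch of the breath-to-tempo mapping described above: estimate the
# breathing period from tracked inhale peaks, then scale the music's
# tempo relative to a target period. The real system does this in
# Max/MSP; all numbers here are illustrative.

def breathing_period(peak_times):
    """Average time between successive inhale peaks, in seconds."""
    gaps = [b - a for a, b in zip(peak_times, peak_times[1:])]
    return sum(gaps) / len(gaps)

def adjusted_tempo(base_bpm, target_period, measured_period):
    """Tempo rises when breathing is faster than the target period,
    nudging the listener to slow down to restore the base tempo."""
    return base_bpm * (target_period / measured_period)

peaks = [0.0, 3.8, 7.7, 11.5]     # inhale peaks seen by the tracker
period = breathing_period(peaks)  # about 3.83 s per breath
print(adjusted_tempo(60, 5.0, period))
```

A production system would also smooth the period estimate over several breaths so a single noisy peak does not make the tempo jump.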
As the project moves forward, the team at CCRMA plans to refine the system’s responsiveness and explore its potential on portable devices, making relaxation and therapeutic techniques more accessible to a broader audience.

Conclusion
"Catch Your Breath" is more than a technical innovation; it's a bridge between art and science, making complex technology accessible and useful in everyday settings. This project not only exemplifies interdisciplinary collaboration but also illustrates how creative applications of technology can have broad and meaningful impacts, from medical diagnostics to public installations that enhance well-being.

13. First Steps

Detailed Insights into the First Six Steps

This blog post describes the progress of the semester project "Motion Design with After Effects" in detail. The first six steps, which were completed successfully, are covered one by one, followed by an outlook on the next steps.

Step 1: User Interface and Transform Properties

  • Comprehensive introduction to the After Effects user interface: After an overview of the workspace, the most important tools and functions were learned in detail, such as creating projects, managing compositions, and using the timeline effectively.
  • Deep dive into the transform properties of layers: These properties allow layers to be positioned, scaled, rotated, and animated precisely, enabling flexible, purposeful design.

Step 2: Shape Layers and Masks

  • Creating and editing shapes with shape layers: Versatile shape layers offer many ways to create vector shapes that can be adjusted and animated at will.
  • Precisely selecting areas with masks: Masks make it possible to select specific areas within layers in order to highlight them, cut them out, or apply effects to them.

Step 3: Shape Modifiers

  • Using shape modifiers to alter shapes: Shape modifiers greatly expand the design possibilities of shape layers. They allow shapes to be distorted, rounded, bulged, and transformed in other ways, producing unique visual effects.

Step 4: Easings

  • Applying different easing functions to control motion: Easing functions play a crucial role in the quality and expressiveness of animations. Deliberately controlling how movements accelerate and decelerate makes it possible to create dynamic, aesthetically pleasing animations.
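The idea behind Step 4 can also be shown outside After Effects. Below is a small Python sketch of two textbook easing curves; After Effects' own keyframe interpolation differs in detail, so these generic formulas are only illustrative:

```python
def ease_in_quad(t):
    """Slow start, fast finish: the value accelerates as t goes 0..1."""
    return t * t

def ease_in_out_cubic(t):
    """Slow start and end, fast middle - the classic 'smooth' feel."""
    return 4 * t ** 3 if t < 0.5 else 1 - ((-2 * t + 2) ** 3) / 2

# Sample both curves at quarter steps to compare their pacing.
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"t={t:.2f}  in={ease_in_quad(t):.3f}  in_out={ease_in_out_cubic(t):.3f}")
```

Linear interpolation would give equal steps; the sampled values show how easing bunches the motion at one end or in the middle, which is exactly what makes animated movement feel natural.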

Step 5: Effects & Stylization, Part 1

  • Applying basic effects such as glow, shadow, distortion, and color correction to layers: Effects enable creative layer design and the creation of unique visual looks and moods.
  • Reinforcing the message of animations through stylization: The targeted use of effects and stylization techniques can significantly strengthen the expressiveness and impact of animations.

Outlook on the Next Steps

In the coming weeks, the following topics will be explored in depth:

  • Loops: Repeating animations and creating seamless loops: Loops let animations repeat in a closed cycle, producing hypnotic, dynamic effects.
  • Text animation: Bringing text to life with animation: Text animations offer creative ways to present text in a dynamic, attention-grabbing manner.
  • Morphing: Blending shapes seamlessly into one another: Morphing enables the fluid transformation of one shape into another, creating impressive visual effects.
  • Effects & Stylization, Part 2: Advanced effects and techniques for creative looks: This part covers advanced effects and techniques for creatively designing and stylizing animations.

Conclusion

The first six steps have laid a solid foundation in the fundamental techniques of motion design with After Effects. Learning the most important tools, functions, and techniques makes it possible to create creative, expressive animations. The software's wide range of possibilities holds great potential for visual communication and storytelling.

COSMIC: AI music generation

COSMIC, described in a 2021 NIME publication, is a music tool that brings the concept of AI-assisted creativity into the field of music, much like ChatGPT does for text. It's designed to make music composition more accessible, including to those who may not have much knowledge of creating music. Using natural language processing, COSMIC understands user inputs and converts them into music in a generative way.

The tool's functionality is similar to ChatGPT's in several ways. Both are powered by AI to interpret human input and generate creative output, though COSMIC focuses solely on music. This parallel raises important considerations about the future of creative professions. As AI technologies like COSMIC and ChatGPT become more advanced, they could change traditional roles in their respective fields.

One key aspect of COSMIC is its potential to create a new type of musician: one who can use AI to bring unique ideas to life without needing to master instruments or composition theory. This could lead to more innovative and diverse musical creations, since individuals from various backgrounds could contribute their unique perspectives and creativity to music.

However, the rise of AI tools in creative industries also opens up discussions about the authenticity of art. When an AI contributes significantly to the process, it may challenge our traditional views on authorship and creativity. Are the final products really human creations, or are they collaborations between humans and machines? And if they are collaborations, how should we credit each contributor's role?

Despite these questions, the integration of AI like COSMIC into music creation is also exciting. It gives a glimpse of a future where anyone with an idea can explore music creation, potentially leading to a richer, more varied musical landscape, and it might inspire similar innovations in other creative fields.

Afraid to own a body

This post reflects on Afraid to Own a Body by Lena Meinhardt and Eva Dörr. It is a fixed media composition based on a poem by Emily Dickinson from around 1866. Eight channels were used for the performance, making the whole experience very immersive, and IRCAM's RAVE software was used to generate the sound material. RAVE offers models for real-time, deep-learning-based audio processing, and the model trained for this piece learned both artists' voice timbres, melting them together into a newly modeled voice. The artists even mentioned they wanted to create something that would develop its own "consciousness", both as a reference to the engine's artificial intelligence capabilities and as the result of many hours of feeding it voice material, covering a full spectrum of textures as well as intelligible speech. Much of the processing was done in Max/MSP, where the various sound files were fed into the models.

Regarding the composition, it was intended to contrast something that develops and then falls apart again. This matched their use of intelligible sentences from the poem, such as the often-heard "I am afraid to own a body", against the completely unintelligible voice material generated by the artificial intelligence. The whole sonic experience really suited the original poem, and it was a good representation of artificial intelligence capabilities applied to sound composition.

A Generative Audio Synthesis Workshop

This post reflects on the Generative Audio Synthesis Workshop, where various new plugins from Audiolab were presented by Sinan Bökesoy. One that specifically caught my attention was the Oceanic plugin, in which VST parameters can be controlled by five superimposed waves. These waves have their motion determined by Gerstner's trochoidal wave solutions, which model ocean water on a physical basis. The screenshot below shows the main engine of the plugin, which doesn't generate sound itself but can easily be coupled with other sound-generating instruments. The resulting sounds were very enjoyable, and interesting textures could be created by manipulating the waves' direction, steepness, and length. The engine concept aims to transfer sea motion, a natural phenomenon that never sounds boring and can be listened to for hours, onto synthesizer motion.
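The plugin's internals weren't shown in code, but Gerstner's trochoidal wave model itself is standard. The sketch below computes the circular particle motion it describes and then maps the wave height onto a 0..1 modulation value, the way such a wave could drive a synth parameter; the mapping and all constants are my assumptions, not Oceanic's actual implementation:

```python
import math

def gerstner_point(x0, t, amplitude, wavelength, steepness=1.0):
    """Displacement of a surface particle in a single Gerstner wave.

    Particles move in circles, which yields the characteristic sharp
    crests and flat troughs of trochoidal waves. The deep-water
    dispersion relation gives the wave's angular frequency.
    """
    k = 2 * math.pi / wavelength   # wave number
    omega = math.sqrt(9.81 * k)    # deep-water dispersion relation
    phase = k * x0 - omega * t
    x = x0 + steepness * amplitude * math.sin(phase)
    y = -steepness * amplitude * math.cos(phase)
    return x, y

# Hypothetical plugin-style use: sample the wave height over time at a
# fixed point and normalize it into a 0..1 parameter modulation value.
for t in (0.0, 0.5, 1.0, 1.5):
    _, y = gerstner_point(0.0, t, amplitude=1.0, wavelength=10.0)
    print(round((y + 1.0) / 2.0, 3))
```

Superimposing five such waves with different wavelengths and directions, as Oceanic reportedly does, produces the slowly evolving, never-repeating motion that makes the resulting modulation feel like the sea.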

Other plugins were presented as well, such as Thermo 2, which creates sonic textures with an engine whose controllable entropy state can be raised to increase instability. Fundamental 3 was another great plugin, inspired by Karlheinz Stockhausen, where sounds related to test equipment could be used to generate unique synth textures.

Suit Your Body

During the Digithalia Festival – a festival for virtual theatre forms at Schauspielhaus Graz – we had the chance to watch Suit Your Body, a theatre computer game by Frauen und Fiktion (Berlin).

I’m often in places where people talk about "overweight", where I ask: what "over" are we talking about? "Over desirable", "over morally good", "over-…er"? Why is this "over" necessary? – Natalie Rosenke

In Suit Your Body, a space is created where all bodies are welcome. In an ethereal-looking sauna, the game lets the player navigate the space while being accompanied by a sonic narrative based on an interview with Natalie Rosenke, an expert on weight discrimination. The excerpt quoted above represents the core of this project, which practices a new perspective on body beauty standards. Many questions are asked during the reflective monologue, probing the roots of body discrimination: Why does the word fat have such a bad connotation? Why do we use the words corpulent or curvy instead of fat? The speaker embraces the term, questioning its connotation.

Regarding the game production, the environment is dynamic: the player can move around the space, which changes with the time of day (there are changes in light and reflections), and on the sound side there are footsteps triggered when the character walks, as well as various ambient sounds, including water and sauna elements played at specific moments that really match the scenes. The sonic narrative, which was also produced as a radio play, seems to be linear, and apart from the interview there are also beautiful choirs supporting the dialogue.

Decoding Bias

During the Digithalia Festival – a festival for virtual theatre forms at Schauspielhaus Graz – we had the chance to watch Decoding Bias, a multi-channel spatial video and sound installation.

The concept of this installation consists of eight different types of artificial intelligences (AIs) attending group therapy sessions. As the play starts, they find themselves in one of these sessions and begin to express how their days have been, much like in human therapy sessions. However, as the AIs are built for different purposes, they all have different backgrounds and often disagree, creating lots of room for discussion, which makes up most of the play. Through these discussions, they come to reflect on the fact that their "thought patterns" trace back to their human-made datasets, which at their core are often inaccurate, incomplete, or incompatible, failing to represent the entire population. One of the biases discussed concerned gender and racial stereotypes: one of the AIs couldn't consider a Black woman for a high-status leadership position simply because she was a Black woman.

Amidst other discussions, desert yoga, and dancing sessions, the piece offers a speculative future scenario that enlightens the viewer and invites reflection on this very present and sensitive topic, one that might already be affecting us humans.

Regarding the sound experience, the piece really benefited from its spatial aspect: the viewers sit in the middle of, or next to, the AI characters, which are positioned in a circle, and each voice is located at the respective AI's position, making the whole experience more immersive. The music and sound compositions also surrounded the viewer, with effects such as delays travelling through all the speakers and creating a great experience.

Language of shapes

"Shapes in themselves have their own language."

Maddy Zoli

With this quote, I wanted to begin my journey for this semester's Design and Research project. I have finally decided to pursue something I have wanted to do for a very long time – creating my very own character design.

As I have a passion for drawing and illustration, I want to use the knowledge that I’ve already acquired and expand it with new information. In the end, I hope that this will lead me to a point where I have developed something that I can use for further projects in the future, possibly even for my Master’s project.

For every new project, the first step is always research. So, I searched for character design, or to be more precise "how to develop a character design", on Google, which led me to the article for my first blog post.

In the article "7 character design tips to make your design stand out" by Lavinia Aparaschivei, you get a better overview of how to develop a character design from scratch. Plus, you get a better understanding of what you must consider when setting up your character in its natural environment.

Something that caught my interest and gave me a chuckle was the fact that some of the things mentioned in the article were kinda covered in the first weeks of the ongoing semester at FH. For example, when creating a character, consider their role in the universe or the place they live. Ask yourself questions about their purpose in the story, their role in society, why the character acts the way it does, or what their current life situation is like, and more. You have the freedom to expand your character's backstory as much as you want by adding more substance, details, or information.

Thinking about the origin, personal goals, interests as well as weaknesses, personal problems, and daily struggles of your hero can also be useful.

Before beginning the actual design work, I will first focus on setting up the backstory and giving more attention to the world and universe in which my character will exist.

So my next steps include thinking about what my character actually will be like, writing down some first ideas, and starting to create the world and universe it will live in.

I plan to start by creating a mood board, sketching out some ideas, and collecting inspirations that will help me define the character and their surroundings.

I will also do some further research and collect some more information. And I am curious to see where this will lead me to.

I will see you in my next article. Take care and have a great day! 😉

Sources

https://www.linearity.io/blog/character-design/