→ Impulse_06 | Tutorial (TouchDesigner)

For my sixth Impulse, I wanted to try something online. I was hoping to join a free TouchDesigner workshop, but I couldn’t find one that worked with my schedule. Instead, I decided to follow a tutorial to help me with the voice interaction I’m working on for my thesis.

I chose the Abstract Speech Visualisation tutorial by Bileam Tschepe (Elekktronaut). His explanations were easy to follow, and the way he set up the audio interaction was really inspiring. He broke everything down step by step, which made it much easier to understand, especially for someone like me who’s still learning.

I then followed the tutorial and recreated the patch in TouchDesigner. I played around with the parameters to see how small changes affected the visuals, and it was interesting to see how different audio inputs produced different effects. I had fun testing combinations. Below is a recording of my experiment with the patch, and here are my main takeaways:

  • Audio Analysis in TouchDesigner: The tutorial showed how to break audio down into frequencies and amplitudes, which was very useful for understanding how sound can control visual elements (see the sketch after this list).
  • Creating Dynamic Visuals: I learned how to use particle systems and shapes to make visuals that respond directly to sound.
  • A New Perspective on Language: Visualizing a language in a new way, like the Heptapod logograms in Arrival, can make us question our usual way of thinking about communication.
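
To make the audio analysis point more concrete: outside of TouchDesigner, the same idea of splitting a recording into frequencies and amplitudes can be sketched in a few lines of Python. This is only a minimal illustration of the concept, not the tutorial's patch; the file name and band ranges are placeholders I made up.

```python
# Minimal sketch of audio analysis: split a voice recording into
# frequency bands and read off their amplitudes (conceptually what
# the Audio Spectrum / Analyze CHOPs do inside TouchDesigner).
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("voice.wav")  # hypothetical mono recording
samples = samples.astype(np.float64)

spectrum = np.abs(np.fft.rfft(samples))            # amplitude per frequency bin
freqs = np.fft.rfftfreq(len(samples), d=1 / rate)  # bin centers in Hz

# Rough bands: lows could drive one visual parameter, mids another, etc.
for name, low, high in [("low", 0, 250), ("mid", 250, 2000), ("high", 2000, 8000)]:
    band = spectrum[(freqs >= low) & (freqs < high)]
    print(f"{name} band amplitude: {band.mean():.1f}")
```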

This tutorial helped me practice using TouchDesigner to create visuals that react better to voice and sound. It is also very relevant to my thesis, since I am trying to visualize the unique characteristics of different languages.

→ Watch the Tutorial: Abstract Speech Visualisation by Bileam Tschepe
→ Learn more about TouchDesigner: TouchDesigner Official Website

→ Impulse_03 | Podcast (Visual Cast)

For my third impulse, I watched a podcast episode from Visual Cast featuring Jascha Suess, a very talented VJ who has worked on many projects with well-known DJs. I follow his work on Instagram, so I was curious to hear about the process and the stories behind his work.

Even though the podcast focused on VJing, it gave me new ideas for my own project about language visualizations. Jascha shared how he uses TouchDesigner to create visuals and build interactive systems. Hearing this made me realize again how powerful TouchDesigner is, and it inspired me to explore it even more.

One thing that stood out to me was how Jascha builds entire UIs and patches in TouchDesigner. He talked about how flexible and creative the software is, which is something I’ve started to experience in my own experiments. It’s exciting to see someone use it at such a high level, and it motivates me to keep learning.

Jascha mentioned that he isn’t a programmer and doesn’t write much code, but he loves working with TouchDesigner’s node-based interface. He finds it easier and more intuitive than traditional coding, and he said it allows him to focus more on creativity. This made me feel more confident because I also don’t have strong coding skills, but I can still create complex systems using nodes.

Jascha also explained how he connects inputs like music or motion to create visuals that react in real time. This made me think about how I could make my own project more interactive: instead of static visuals, I could create a setup where users speak into a microphone and the visuals change based on the sounds of their voice.
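
To sketch what that could look like outside of TouchDesigner: the snippet below is my own minimal illustration, not Jascha's setup. It assumes the third-party sounddevice library and a default microphone, reads the mic, and turns its loudness into a 0–1 value that a visual parameter could follow.

```python
# Minimal sketch: read the microphone and map its loudness to a value
# in [0, 1] that could drive a visual parameter in real time.
# Assumes the third-party `sounddevice` library and a default mic.
import numpy as np
import sounddevice as sd

def audio_callback(indata, frames, time, status):
    rms = float(np.sqrt(np.mean(indata ** 2)))   # loudness of this block
    level = min(rms * 10.0, 1.0)                 # arbitrary scaling into [0, 1]
    print("visual intensity:", round(level, 3))  # a patch would read this value

# 44.1 kHz mono input stream; the callback fires for every audio block.
with sd.InputStream(channels=1, samplerate=44100, callback=audio_callback):
    sd.sleep(5000)  # listen for five seconds
```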

He also talked about organizing projects into smaller steps. He starts with simple patches to test ideas and then builds on them. This approach feels very practical, and I plan to try it in my own workflow.

Conclusion

Watching this podcast helped me see new possibilities in my work. Jascha’s approach to using TouchDesigner is creative and inspiring, and I want to dive deeper into what the software can do. I also learned that even without coding expertise, it’s possible to create complex and meaningful projects by focusing on the tools and workflows that work best for me.

→ Link to the podcast: https://www.youtube.com/watch?v=MWsk_JaCiew&t=2s

18 | New Visuals 🌱

🟦 09_Learnings & Next Steps

From exploring audio-reactive visuals in TouchDesigner to conceptualizing and designing a simple app prototype in Figma, this project was so much fun! It all began with a curiosity about how different languages visually manifest through sound, which led me to experiment with various tools and techniques.

I am so happy that I dove deeper into TouchDesigner. I learned a lot about creating and manipulating visuals, especially making them respond to audio input, which was very fun to learn. Along the way, I also listened to the German alphabet countless times, which was unexpectedly useful (and quite amusing 🤭).

Transitioning to Figma, I designed a mini prototype for a pronunciation visualizer app. It lets users select their native language and a language they wish to learn, then provides real-time visual feedback on pronunciation patterns. I had so many ideas for using the visuals in a practical application, but ultimately I focused on perfecting the visuals themselves, which was both challenging and rewarding.

Next Step

Looking ahead, I see a lot of potential for expanding this project. For example, a comprehensive language learning kit built around these kinds of visuals could make learning more engaging, and different visuals for different languages might be interesting. In a testing phase, it would also be interesting to test the visuals with deaf individuals to see whether they can detect patterns in each language; this could open up new ways of understanding and teaching languages. I'm excited about the possibilities, and we'll see whether I work on this project more in the future!

🟦 08_Prototype

After analyzing the audio-reactive visuals, I wondered how I could apply these outcomes practically. While creating an app might seem like an obvious route, I wanted to explore how the visuals would look and function in that context.

App Prototype

I designed a demo app prototype using Figma (a minimal sketch of the screen flow follows the list below):

  1. Home Screen: Native Language Selection
    • Users start by selecting their native language.
  2. Language Learning Selection
    • Next, users choose the language they want to learn. This screen ensures that the app tailors the visualizations and pronunciation guides to the selected language.
  3. Start Pronunciation Practice
    • The final screen displays the audio-reactive video I made in TouchDesigner, along with a simple prompt: „Hold to Pronounce.“ This lets users practice pronunciation and see the visual feedback in real time.
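
Since the Figma prototype is click-through only, here is a tiny sketch of the same three-screen flow written out as code, just to make the logic explicit. The language list and video path are hypothetical placeholders, not something from the actual Figma file.

```python
# Minimal sketch of the prototype's three-screen flow.
# Language list and video paths are hypothetical placeholders.
from dataclasses import dataclass

LANGUAGES = ["German", "English", "Turkish", "Russian", "Hindi", "Chinese"]

@dataclass
class Session:
    native: str  # screen 1: native language selection
    target: str  # screen 2: language to learn

    def practice_video(self) -> str:
        # Screen 3 would load the matching TouchDesigner render.
        return f"visuals/{self.target.lower()}_hold_to_pronounce.mp4"

session = Session(native="Turkish", target="German")
print(session.practice_video())  # -> visuals/german_hold_to_pronounce.mp4
```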

I kept the prototype quick because, for me, the process and learnings from TouchDesigner were more challenging and interesting.

Next Step

I plan to write about my learnings and reflect on the overall process.

🟦 07_Analysis of Outcome

After experimenting with AI voices and creating audio-reactive visuals for different languages, the next step is to analyze the outcomes in detail. Here’s how I approached the analysis and what I discovered.

Collecting and Comparing Data

First, I recorded the visuals for each language and phrase under consistent conditions. By placing these recordings side by side, I could compare them directly. I looked for patterns, shapes, and movements unique to each language, paying special attention to how the visuals reacted to specific sounds, particularly vowels and consonants.
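
Because the visuals are driven directly by the audio, one way to support the side-by-side viewing with numbers is to compute a simple audio feature per language. The sketch below is only an illustration, not something from my actual analysis: it computes the spectral centroid (a rough „brightness“ measure that shifts between vowel- and consonant-heavy passages) for hypothetical recordings of the same phrase.

```python
# Rough numeric companion to the visual comparison: the spectral
# centroid ("brightness") of each language's recording of the same
# phrase. File names are hypothetical placeholders.
import numpy as np
from scipy.io import wavfile

def spectral_centroid(path: str) -> float:
    rate, samples = wavfile.read(path)  # assumes a mono WAV file
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(samples), d=1 / rate)
    return float((freqs * spectrum).sum() / spectrum.sum())

for lang in ["german", "english", "turkish", "russian", "hindi", "chinese"]:
    print(lang, round(spectral_centroid(f"{lang}_hello.wav")), "Hz")
```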

Observations and Differences

During my analysis, I noted some differences:

  • German and English: Both languages produced clear, structured lines. However, German visuals had slightly more detailed patterns.
  • Turkish: Turkish visuals showed more fluid and continuous lines, reflecting its smooth flow and connected structure.
  • Russian: Russian visuals had a mix of sharp and smooth lines, showing its varied sounds and pronunciation.
  • Hindi: Hindi visuals displayed intricate and dense patterns, likely due to its mix of different sounds.
  • Chinese: Chinese visuals featured rhythmic, wave-like patterns, capturing the tones and unique structure of the language.

Here are some examples of the outcomes for „hello“ and „how are you?“:

  • German: „Hallo, wie geht’s?“ – Clear and structured lines with some complexity.
  • English: „Hello, how are you?“ – Similar to German but slightly less complex.
  • Turkish: „Merhaba, nasılsın?“ – Fluid lines, continuous patterns.
  • Russian: „Привет, как дела?“ – Mix of sharp and smooth lines.
  • Hindi: „नमस्ते, आप कैसे हैं?“ – Dense, intricate patterns.
  • Chinese: „你好, 你好吗?“ – Rhythmic, undulating patterns.

Next Step

Analyzing the audio-reactive visuals revealed subtle yet fascinating differences between languages: each language's phonetic characteristics influenced the visuals in unique ways. Moving forward, I want to gather feedback from people around me about where these visuals could be used and what they think of them so far.

🟦 06_First Testing with AI

After making the visuals audio-reactive with the alphabet audios, I used AI voices to see whether words and sentences produced any noticeable differences.

I wanted to test phrases like „hello“ and „how are you?“ in several languages: German, English, Turkish, Russian, Hindi, and Chinese. Unfortunately, the AI tools I found were either not free or had limited trials, so I didn't have many options. In the end, I chose one AI voice tool and one voice actor for each sentence in the different languages (a sketch of how such audio could be batch-generated follows the prompt list below).

Prompts for AI

  • English: „Hello, how are you?“
  • German: „Hallo, wie geht’s?“
  • Turkish: „Merhaba, nasılsın?“
  • Russian: „Привет, как дела?“ (Privet, kak dela?)
  • Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)
  • Hindi: „नमस्ते, आप कैसे हैं?“ (Namaste, aap kaise hain?)
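
As a side note, for anyone who wants to reproduce this step without a paid tool: a free text-to-speech library such as gTTS could batch-generate the same test phrases. This is just an illustrative sketch; it is not the AI voice tool I actually used.

```python
# Minimal sketch: batch-generate the test phrases with the free
# gTTS text-to-speech library. Illustrative only; not the AI voice
# tool used for the actual experiments.
from gtts import gTTS

PHRASES = {
    "en": "Hello, how are you?",
    "de": "Hallo, wie geht's?",
    "tr": "Merhaba, nasılsın?",
    "ru": "Привет, как дела?",
    "zh-CN": "你好,你好吗?",
    "hi": "नमस्ते, आप कैसे हैं?",
}

for lang, text in PHRASES.items():
    gTTS(text=text, lang=lang).save(f"hello_{lang}.mp3")
```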

And here are a few examples of the visualization of „Hello, how are you?“:

Turkish: „Merhaba, nasılsın?“
German: „Hallo, wie geht’s?“
Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)

It was kind of hard to spot the differences between languages just by looking at the visuals. However, the process was very interesting for me, and I could see subtle variations in the patterns and in how the visuals reacted to different sounds.

Next Step

In the next steps, I will analyze the outcomes in more detail. I want to see if I can pinpoint specific differences and understand how each language influences the visuals.

🟦 05_Audio Reactivity

I am still experimenting with the visuals, but for now I will share how I made them audio-reactive in TouchDesigner.

I followed a tutorial and then downloaded Turkish and German alphabet audios to see how the visuals change with different letters. By working with individual letters, I can easily extend this to words and sentences later.
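
For anyone curious how the audio actually reaches the visuals: in TouchDesigner this is mostly node wiring, but the same link can also be written as a tiny Python callback in a CHOP Execute DAT. The sketch below is generic, and the operator names ('analyze1' for an Analyze CHOP tracking amplitude, 'transform1' and 'level1' for TOPs on the visual) are my own assumptions, not the exact patch from the tutorial.

```python
# Sketch of a CHOP Execute DAT callback in TouchDesigner: whenever the
# amplitude channel from an Analyze CHOP ('analyze1') changes, push the
# value into parameters of the visual. Operator names are assumptions.
def onValueChange(channel, sampleIndex, val, prev):
    # 'val' is the current amplitude of the letter being played back.
    op('transform1').par.rotate = val * 360          # spin with loudness
    op('level1').par.opacity = min(val * 2.0, 1.0)   # brighten with loudness
    return
```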

First Visual – Audio Reactive

I started by making the first visual audio-reactive. I think this one fits best because it looks like an audio wave and a weird mouth. The lines are clear, so I can record or take screenshots of each letter; that clarity will help with the analysis later.

Applying Audio Reactivity to Other Visuals

After figuring out how to make the first visual audio-reactive, I applied the same process to the other visuals. Here are the results:

Next Step

I think I will go with the mirrored first visual. My next step is to use an AI voice and see how it looks with words and sentences. I’m excited to see how these visuals evolve and become more refined as I continue experimenting.

🟦 04_Visualization Samples

After creating my first visual, I wanted to experiment more with TouchDesigner and come up with different visuals.

Mirror Effect on the First Visual

First, I changed the initial visual by applying a mirror effect. I aimed to create clear lines that resemble an audio wave. I like that it looks like a mouth, and I am happy with the result. However, I wanted to try more variations.
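
For anyone who wants the idea without opening TouchDesigner: a mirror effect is essentially flipping the image and placing it next to (or over) the original. Below is a minimal NumPy sketch of that general technique, with a random array standing in for a rendered frame; it is not the exact TOP chain I used.

```python
# Minimal sketch of a mirror effect: flip a frame horizontally and butt
# it against the original, doubling it into a symmetric image. (In
# TouchDesigner the equivalent is a Flip TOP composited with the
# original; this array version is only for illustration.)
import numpy as np

frame = np.random.rand(256, 256, 3)                         # stand-in frame
mirrored = np.concatenate([frame, frame[:, ::-1]], axis=1)  # original + flipped

print(frame.shape, "->", mirrored.shape)  # (256, 256, 3) -> (256, 512, 3)
```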

Second Visual with Particles

For the second visual, I followed a tutorial on working with particles. While I enjoyed this process, the result felt too random for my project. I realized that I need more control over the particle movements to fit my theme better.

Mirror Effect on the Second Visual

Lastly, I mirrored the second visual. This resulted in too many mirrored sections and small particles. Despite this, I loved the outcomes and really enjoyed experimenting with TouchDesigner.

Next Step

As a next step, I plan to make all of these visuals audio-reactive and then choose the one that fits my project best. I am having a lot of fun experimenting with TouchDesigner, I love itttt!