18 | New Visuals 🌱

🟦 09_Learnings & Next Steps

From exploring audio-reactive visuals in TouchDesigner to conceptualizing and designing a simple app prototype in Figma, this project was so much fun! It all began with a curiosity about how different languages visually manifest through sound, which led me to experiment with various tools and techniques.

I am so happy that I dived deeper into TouchDesigner. I learned a lot about creating and manipulating visuals, especially making them respond to audio inputs, which was very fun to learn. Along the way, I listened to the German alphabet countless times, which was unexpectedly useful (and quite amusing 🤭)

Transitioning to Figma, I designed a mini prototype for a pronunciation visualizer app. This app allows users to select their native language and a language they wish to learn, providing real-time visual feedback on pronunciation patterns. The thing is, I had so many ideas for using the visuals in a practical application. Ultimately, I focused on perfecting the visuals, which was both challenging and rewarding.

Next Step

Looking ahead, I see a lot of potential for expanding this project. For example, developing a comprehensive language learning kit with these types of visuals could make learning more engaging, and having different visuals for different languages might be interesting. In the testing phase, it would also be interesting to test the visuals with deaf individuals to see if they can detect patterns in each language. This could open up new ways of understanding and teaching languages. I’m excited about the possibilities, and let’s see if I will work more on this project in the future!

🟦 08_Prototype

After analyzing the audio-reactive visuals, I wondered how I could apply these outcomes practically. While creating an app might seem like an easy route, I wanted to explore how it would look and function in this context.

App Prototype

I designed a demo app prototype using Figma:

  1. Home Screen: Native Language Selection
    • Users start by selecting their native language.
  2. Language Learning Selection
    • Next, users choose the language they want to learn. This screen ensures that the app tailors the visualizations and pronunciation guides to the selected language.
  3. Start Pronunciation Practice
    • The final page displays the data visualized video that I made in TouchDesigner. There is a simple prompt: „Hold to Pronounce.“ This allows users to practice pronunciation and see the visual feedback in real-time.

I kept the prototype quick because, for me, the process and learnings from TouchDesigner were more challenging and interesting.

Next Step

I plan to write about my learnings and reflect on the overall process.

🟦 07_Analysis of Outcome

After experimenting with AI voices and creating audio-reactive visuals for different languages, the next step is to analyze the outcomes in detail. Here’s how I approached the analysis and what I discovered.

Collecting and Comparing Data

First, I recorded the visuals for each language and phrase under consistent conditions. By placing these recordings side by side, I could directly compare them. I looked for patterns, shapes, and movements unique to each language, paying special attention to how the visuals reacted to specific sounds, particularly vowels and consonants.
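The comparison itself was done by eye. As a rough numerical companion to that, one could compare simple audio descriptors of each recording, e.g. RMS energy (loudness) and zero-crossing rate (how "sharp" or noisy a sound is). This is just a sketch of that idea with synthetic signals standing in for the real language recordings:

```python
import numpy as np

def audio_features(samples: np.ndarray) -> dict:
    """Two simple descriptors of an audio clip:
    RMS energy (loudness) and zero-crossing rate (sharpness/noisiness)."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Fraction of sample-to-sample steps where the signal changes sign
    zcr = float(np.mean(np.abs(np.diff(np.sign(samples))) > 0))
    return {"rms": rms, "zcr": zcr}

# Synthetic stand-ins: a smooth tone (vowel-like) vs. noise (fricative-like)
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
vowel_like = 0.5 * np.sin(2 * np.pi * 220 * t)
rng = np.random.default_rng(0)
fricative_like = 0.5 * rng.standard_normal(sr)

print(audio_features(vowel_like))      # low zero-crossing rate
print(audio_features(fricative_like))  # much higher zero-crossing rate
```

A language with more fricatives or sharper consonants would show a higher zero-crossing rate, which loosely matches the "sharp vs. fluid lines" impression from the visuals.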

Observations and Differences

During my analysis, I noted some differences:

  • German and English: Both languages produced clear, structured lines. However, German visuals had slightly more detailed patterns.
  • Turkish: Turkish visuals showed more fluid and continuous lines, reflecting its smooth flow and connected structure.
  • Russian: Russian visuals had a mix of sharp and smooth lines, showing its varied sounds and pronunciation.
  • Hindi: Hindi visuals displayed intricate and dense patterns, likely due to its mix of different sounds.
  • Chinese: Chinese visuals featured rhythmic, wave-like patterns, capturing the tones and unique structure of the language.

Here are some examples of the outcomes for „hello“ and „how are you?“:

  • German: „Hallo, wie geht’s?“ – Clear and structured lines with some complexity.
  • English: „Hello, how are you?“ – Similar to German but slightly less complex.
  • Turkish: „Merhaba, nasılsın?“ – Fluid lines, continuous patterns.
  • Russian: „Привет, как дела?“ – Mix of sharp and smooth lines.
  • Hindi: „नमस्ते, आप कैसे हैं?“ – Dense, intricate patterns.
  • Chinese: „你好, 你好吗?“ – Rhythmic, undulating patterns.

Next Step

Analyzing the audio-reactive visuals revealed subtle yet fascinating differences between languages. Each language’s phonetic characteristics influenced the visuals in unique ways. Moving forward, I want to gather feedback from people around me about where these visuals could be used and what they think of them so far.

🟦 06_First Testing with AI

After making the visuals audio-reactive with alphabet audios, I used AI voices to see if there were any noticeable changes with words and sentences.

I wanted to test phrases like „hello“ and „how are you?“ in various languages like: German, English, Turkish, Russian, Hindi, and Chinese. Unfortunately, the AI tools I found were either not free or had limited trials, so I didn’t have many options. I chose one AI voice tool and one voice actor for each sentence in the different languages.

Prompts for AI

  • English: „Hello, how are you?“
  • German: „Hallo, wie geht’s?“
  • Turkish: „Merhaba, nasılsın?“
  • Russian: „Привет, как дела?“ (Privet, kak dela?)
  • Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)
  • Hindi: „नमस्ते, आप कैसे हैं?“ (Namaste, aap kaise hain?)

And here are a few examples of the visualization of „Hello, how are you?“:

Turkish: „Merhaba, nasılsın?“
German: „Hallo, wie geht’s?“
Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)

It was kind of hard to realize the differences between languages just by looking at the visuals. However, the process was very interesting for me. I could see subtle variations in the patterns and reactions of the visuals to different sounds.

Next Step

In the next steps, I will analyze the outcomes in more detail. I want to see if I can pinpoint specific differences and understand how each language influences the visuals.

🟦 05_Audio Reactivity

I am still experimenting with the visuals, but now I will share how I made them audio-reactive in TouchDesigner.

I followed a tutorial and then downloaded Turkish and German alphabet audios to see how the visuals change with different letters. By working with individual letters, I can easily extend this to words and sentences later.
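In TouchDesigner this is done with CHOP networks (audio in, analysis, then wiring the result into visual parameters), but the core idea can be sketched in plain Python: slice the audio into short windows and turn each window's loudness into a displacement for the line visual. The function names here are illustrative, not taken from my actual network:

```python
import numpy as np

def amplitude_envelope(samples: np.ndarray, window: int = 512) -> np.ndarray:
    """Per-window RMS loudness — the control signal that drives the visual."""
    n = len(samples) // window
    frames = samples[: n * window].reshape(n, window)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def displace_line(envelope: np.ndarray, scale: float = 100.0) -> np.ndarray:
    """Map loudness to the vertical displacement of points on a line,
    like driving a visual parameter from an audio-analysis CHOP."""
    return envelope * scale

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
# A letter-like burst: silence, then a short loud vowel, then silence
audio = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 200 * t), 0.0)
line_y = displace_line(amplitude_envelope(audio))
print(line_y.max())  # the line only moves during the burst
```

Because each letter recording produces its own envelope, the same mapping extends directly to words and sentences later.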

First Visual – Audio Reactive

I tried to make the first visual audio-reactive. I think this one fits best because it looks like an audio wave and a weird mouth. The lines are clear, so I can record or take screenshots of each letter. This clarity can help in analyzing it.

Applying Audio Reactivity to Other Visuals

After figuring out how to make the first visual audio-reactive, I applied the same process to the other visuals. Here are the results:

Next Step

I think I will go with the mirrored first visual. My next step is to use an AI voice and see how it looks with words and sentences. I’m excited to see how these visuals evolve and become more refined as I continue experimenting.

🟦 04_Visualization Samples

After creating my first visual, I wanted to experiment more with TouchDesigner and come up with different visuals.

Mirror Effect on the First Visual

First, I changed the initial visual by applying a mirror effect. I aimed to create clear lines that resemble an audio wave. I liked that it looks like a mouth, and I am happy with the result. However, I wanted to try more variations.
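The mirror effect is conceptually simple: reflect the one-sided wave so it reads the same from both ends, which is what gives the symmetric, mouth-like shape. A minimal sketch of the idea on a 1-D signal (TouchDesigner does this on the image itself):

```python
import numpy as np

def mirror(signal: np.ndarray) -> np.ndarray:
    """Append a reversed copy of the signal, so a one-sided audio
    wave becomes a symmetric, mouth-like shape."""
    return np.concatenate([signal, signal[::-1]])

wave = np.array([0.0, 0.3, 0.8, 0.5])
mirrored = mirror(wave)
print(mirrored)  # symmetric: reads the same forwards and backwards
```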

Second Visual with Particles

For the second visual, I followed a tutorial on working with particles. While I enjoyed this process, the result felt too random for my project. I realized that I need more control over the particle movements to fit my theme better.

Mirror Effect on the Second Visual

Lastly, I mirrored the second visual. This resulted in too many mirrored sections and small particles. Despite this, I loved the outcomes and really enjoyed experimenting with TouchDesigner.

Next Step

As the next step, I plan to make all of these visuals audio-reactive and then choose the one that fits my project best. I am having a lot of fun experimenting with TouchDesigner, I love itttt!

🟦 03_Diving into TouchDesigner

After deciding on my topic, I gathered some tutorials from YouTube to help with my data visualization project. I aimed to create something using lines so I could showcase the differences between various languages.

Moodboard for visualization

I believe using line shapes will effectively highlight the audio differences between different language inputs. Lines also have a clean, aesthetic look that I think will result in a visually appealing project. However, with my limited knowledge in TouchDesigner, I am not sure if I can achieve the exact outcome I envision. Therefore, I will experiment with some tutorials and see which ones work best. I’m really excited to dive into TouchDesigner and learn more!

First Attempt – Successful!

I followed this tutorial and found the beginning challenging because it focused on making the project audio-reactive from the start. My first step, however, was to create the visual aspect. The initial outcome didn’t quite meet my expectations as it resulted in a line that moved randomly. Randomness might not be ideal for my project since I want to compare different letters clearly.

Above, you can see my initial outcome. With some modifications and added audio reactivity, it might work for the prototype phase. I’m satisfied with the progress so far, but I plan to either refine this outcome further or try other tutorials. I believe having multiple visualization options will be beneficial during the testing phase.

Next Step

I will continue working through more tutorials, focusing on making the visualizations audio-reactive. Then, I’ll choose the one that fits the best! My goal is to create a dynamic, clear and aesthetic visualization that shows the differences between various languages through their audio characteristics.

16 | Arduino with TouchDesigner 🌱

Arduino Controller made of an ESP8266 12-F board, an HW-040 Rotary Encoder, and a 3D-printed knob.
TouchDesigner Visuals (left to right: good state, neutral state, bad state)