
🟦 09_Learnings & Next Steps

From exploring audio-reactive visuals in TouchDesigner to conceptualizing and designing a simple app prototype in Figma, this project was so much fun! It all began with a curiosity about how different languages visually manifest through sound, which led me to experiment with various tools and techniques.

I am so happy that I dove into TouchDesigner a bit more. I learned a lot about creating and manipulating visuals, especially making them respond to audio input, which was very fun to learn. Along the way, I also listened to the German alphabet countless times, which was unexpectedly useful (and quite amusing 🤭).

Transitioning to Figma, I designed a mini prototype for a pronunciation visualizer app. The app lets users select their native language and a language they wish to learn, and provides real-time visual feedback on pronunciation patterns. The thing is, I had so many ideas for using the visuals in a practical application, but ultimately I focused on perfecting the visuals themselves, which was both challenging and rewarding.

Next Step

Looking ahead, I see a lot of potential for expanding this project. For example, developing a comprehensive language learning kit built around these types of visuals could make learning more engaging, and having different visuals for different languages might be interesting. In the testing phase, it would also be interesting to test the visuals with deaf individuals to see if they can detect patterns in each language; this could open up new ways of understanding and teaching languages. I’m excited about the possibilities, and let’s see if I work on this project more in the future!

🟦 06_First Testing with AI

After making the visuals audio-reactive with the alphabet audio clips, I used AI voices to see if there were any noticeable changes when moving from single letters to words and sentences.

I wanted to test phrases like „hello“ and „how are you?“ in various languages: German, English, Turkish, Russian, Hindi, and Chinese (Mandarin). Unfortunately, the AI tools I found were either not free or had limited trials, so I didn’t have many options. In the end, I chose one AI voice tool and used one voice actor per sentence in each language.

Prompts for AI

  • English: „Hello, how are you?“
  • German: „Hallo, wie geht’s?“
  • Turkish: „Merhaba, nasılsın?“
  • Russian: „Привет, как дела?“ (Privet, kak dela?)
  • Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)
  • Hindi: „नमस्ते, आप कैसे हैं?“ (Namaste, aap kaise hain?)
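
A rough sketch of how the clip swapping and screenshots could be automated from TouchDesigner’s textport (the operator names 'audiofilein1' and 'out1' and the file paths are placeholders, not my actual network):

    # Hypothetical helper for swapping the test clips and grabbing a still of
    # the output. 'audiofilein1' is an Audio File In CHOP, 'out1' is the final
    # TOP of the visual; both names are placeholders.
    clips = {
        'english':  'audio/hello_en.wav',
        'german':   'audio/hallo_de.wav',
        'turkish':  'audio/merhaba_tr.wav',
        'russian':  'audio/privet_ru.wav',
        'mandarin': 'audio/nihao_zh.wav',
        'hindi':    'audio/namaste_hi.wav',
    }

    def show_language(lang):
        # point the Audio File In CHOP at the chosen clip
        op('audiofilein1').par.file = clips[lang]

    def save_still(lang):
        # save the current frame of the visual for side-by-side comparison
        op('out1').save('screens/' + lang + '.jpg')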

And here are a few examples of the visualization of „Hello, how are you?“:

Turkish: „Merhaba, nasılsın?“
German: „Hallo, wie geht’s?“
Chinese (Mandarin): 你好,你好吗?(Nǐ hǎo, nǐ hǎo ma?)

It was kind of hard to tell the languages apart just by looking at the visuals. However, the process was very interesting for me, and I could see subtle variations in the patterns and in how the visuals reacted to different sounds.

Next Step

In the next steps, I will analyze the outcomes in more detail. I want to see if I can pinpoint specific differences and understand how each language influences the visuals.

🟦 05_Audio Reactivity

I am still experimenting with the visuals, but for now I will share how I made them audio-reactive in TouchDesigner.

I followed a tutorial and then downloaded audio recordings of the Turkish and German alphabets to see how the visuals change with different letters. By working with individual letters first, I can easily extend this to words and sentences later.

First Visual – Audio Reactive

I tried to make the first visual audio-reactive. I think this one fits best because it looks like an audio wave and a weird mouth. The lines are clear, so I can record or take screenshots of each letter, and that clarity should make the analysis easier.
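
Roughly, the audio side boils down to this: an Audio File In CHOP goes through some analysis (level or spectrum) and smoothing, and the resulting channel drives a parameter of the visual. Here is a tiny sketch of one way that hookup can look as a CHOP Execute DAT callback; 'noise1' and the scaling numbers are placeholders, not my exact network:

    # Sketch of a CHOP Execute DAT callback attached to the smoothed level
    # channel. 'noise1' stands in for whatever operator actually drives the
    # displacement of the lines.
    def onValueChange(channel, sampleIndex, val, prev):
        # val is the current (smoothed) audio level, roughly 0..1
        op('noise1').par.amp = 0.1 + val * 2.5
        return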

Applying Audio Reactivity to Other Visuals

After figuring out how to make the first visual audio-reactive, I applied the same process to the other visuals. Here are the results:

Next Step

I think I will go with the mirrored first visual. My next step is to use an AI voice and see how it looks with words and sentences. I’m excited to see how these visuals evolve and become more refined as I continue experimenting.

🟦 04_Visualization Samples

After creating my first visual, I wanted to experiment more with TouchDesigner and come up with different visuals.

Mirror Effect on the First Visual

First, I changed the initial visual by applying a mirror effect. I aimed to create clear lines that resemble an audio wave. I like that it looks like a mouth, and I am happy with the result. However, I wanted to try more variations.
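
For anyone recreating this, one simple way to get the mirror effect is a Mirror TOP appended after the visual’s last TOP. A tiny, hypothetical snippet for adding it from the textport ('final1' stands in for that last operator, '/project1' for the network that holds it):

    # Hypothetical: append a Mirror TOP to the end of the existing chain.
    net = op('/project1')
    mirror = net.create(mirrorTOP, 'mirror1')
    mirror.inputConnectors[0].connect(net.op('final1'))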

Second Visual with Particles

For the second visual, I followed a tutorial on working with particles. While I enjoyed this process, the result felt too random for my project. I realized that I need more control over the particle movements to fit my theme better.
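
One way I could get more control, assuming the particle motion is driven by a Noise operator (which is how these tutorials usually do it), would be to pin the noise seed and slow it down, so every letter plays against the same motion instead of a fresh random pattern. Something like:

    # Hypothetical tweak: fix the seed and slow the period of the noise that
    # drives the particles, so the motion is repeatable between recordings.
    op('noise_force').par.seed = 7      # same pattern every run
    op('noise_force').par.period = 4    # broader, slower movement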

Mirror Effect on the Second Visual

Lastly, I mirrored the second visual. This resulted in too many mirrored sections and small particles. Despite this, I loved the outcomes and really enjoyed experimenting with TouchDesigner.

Next Step

As a next step, I plan to make all of these visuals audio-reactive and then choose the one that fits my project best. I am having a lot of fun experimenting with TouchDesigner; I love itttt!

🟦 03_Diving into TouchDesigner

After deciding on my topic, I gathered some tutorials from YouTube to help with my data visualization project. I aimed to create something using lines so I could showcase the differences between various languages.

Moodboard for visualization

I believe using line shapes will effectively highlight the audio differences between language inputs. Lines also have a clean, aesthetic look that I think will result in a visually appealing project. However, with my limited knowledge of TouchDesigner, I am not sure if I can achieve the exact outcome I envision. Therefore, I will experiment with some tutorials and see which ones work best. I’m really excited to dive into TouchDesigner and learn more!
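
A common recipe for this kind of line visual in TouchDesigner is to turn the audio into channels and then into geometry: Audio File In CHOP → Audio Spectrum CHOP → CHOP to SOP → a line rendered with a wireframe-style material. I’m not sure yet whether the tutorials I picked do exactly this, but here is a rough, hypothetical sketch of that chain (all names are placeholders):

    # Rough sketch of a generic audio-line chain: audio -> spectrum channels ->
    # SOP geometry, which then goes into a standard Geometry/Render setup.
    net = op('/project1')
    audio    = net.create(audiofileinCHOP,   'audio1')
    spectrum = net.create(audiospectrumCHOP, 'spectrum1')
    line     = net.create(choptoSOP,         'line1')

    spectrum.inputConnectors[0].connect(audio)
    line.par.chop = spectrum.path   # the SOP reads the spectrum channels as points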

First Attempt – Successful!

I followed this tutorial and found the beginning challenging because it focused on making the project audio-reactive from the start. My first step, however, was to create the visual aspect. The initial outcome didn’t quite meet my expectations as it resulted in a line that moved randomly. Randomness might not be ideal for my project since I want to compare different letters clearly.

Above, you can see my initial outcome. With some modifications and added audio reactivity, it might work for the prototype phase. I’m satisfied with the progress so far, but I plan to either refine this outcome further or try other tutorials. I believe having multiple visualization options will be beneficial during the testing phase.

Next Step

I will continue working through more tutorials, focusing on making the visualizations audio-reactive. Then, I’ll choose the one that fits the best! My goal is to create a dynamic, clear and aesthetic visualization that shows the differences between various languages through their audio characteristics.