After making the visuals audio-reactive with recordings of individual letters, I used AI-generated speech to see whether there were any noticeable changes when it comes to words and sentences.
I wanted to test phrases like "hello" and "how are you?" in several languages: German, English, Turkish, Russian, Hindi, and Chinese. Unfortunately, the AI tools I found were either paid or had limited trials, so I didn't have many options. In the end, I chose one AI voice tool and one voice actor for each sentence in the different languages.
Prompts for AI
- English: "Hello, how are you?"
- German: "Hallo, wie geht's?"
- Turkish: "Merhaba, nasılsın?"
- Russian: "Привет, как дела?" (Privet, kak dela?)
- Chinese (Mandarin): "你好,你好吗?" (Nǐ hǎo, nǐ hǎo ma?)
- Hindi: "नमस्ते, आप कैसे हैं?" (Namaste, aap kaise hain?)
And here are a few examples of the visualization of "Hello, how are you?":
It was quite hard to tell the languages apart just by looking at the visuals. Still, the process was very interesting for me: I could see subtle variations in the patterns and in how the visuals reacted to different sounds.
Next Step
In the next steps, I will analyze the outcomes in more detail. I want to see if I can pinpoint specific differences and understand how each language influences the visuals.
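One way I might start this analysis is to compare simple numeric audio features per clip instead of judging the visuals by eye. Below is a minimal sketch (assuming Python with NumPy; the two synthetic sine "clips" are stand-ins for the real recordings, not my actual pipeline): it computes a per-frame loudness envelope (RMS) and a spectral centroid, two features that an audio-reactive visual typically responds to.

```python
import numpy as np

def audio_features(signal, sr, frame_len=1024, hop=512):
    """Per-frame RMS envelope and spectral centroid for a mono signal.

    RMS tracks loudness; the centroid tracks the "brightness" of the
    sound. Comparing their averages across language clips is a first
    rough way to quantify differences the visuals react to.
    """
    rms, centroid = [], []
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    window = np.hanning(frame_len)
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms.append(np.sqrt(np.mean(frame ** 2)))
        mag = np.abs(np.fft.rfft(frame * window))
        centroid.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    return np.array(rms), np.array(centroid)

# Synthetic stand-ins for two recordings: a low tone vs. a higher one.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
clip_a = 0.5 * np.sin(2 * np.pi * 220 * t)   # stand-in for "clip A"
clip_b = 0.5 * np.sin(2 * np.pi * 880 * t)   # stand-in for "clip B"

rms_a, cen_a = audio_features(clip_a, sr)
rms_b, cen_b = audio_features(clip_b, sr)

# Same loudness, but the higher clip has a higher average centroid.
print(rms_a.mean(), rms_b.mean())
print(cen_a.mean(), cen_b.mean())
```

For the real recordings, the signals would be loaded from the exported voice files instead of synthesized, and the averaged features for each language could then be compared side by side.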