This week was all about feedback. After months of deep diving into research, I had the chance to discuss my master’s thesis progress with three different experts, each offering a unique perspective on my work. These conversations helped me reflect on where I am, what I’ve accomplished so far, and, most importantly, where I should go next.
First Round: Structuring the Next Steps
On Wednesday, I had a meeting with Ms. Ursula Lagger, who guided us through our master’s thesis proseminar this semester. Our conversation focused on my exposé, my current research state, and my plans moving forward. While I have already done a lot of research on the theoretical background, she emphasized that now is the time to shift towards the practical aspects of my work. One of the biggest takeaways from this meeting was the importance of structuring my prototyping phase. She encouraged me to make a clear plan for how and when I will move from expert interviews to practical examples, prototyping, testing, and iteration. Given the timeframe of our thesis, having a structured roadmap will help me stay on track and make the most of the time I have left. This feedback was a great reminder that while research is essential, it needs to be paired with practical application.
Second Round: Expanding My Perspective
Thursday’s meeting with Mr. Horst Hörtner from Ars Electronica Futurelab provided a completely different perspective. We talked about my passion for universal design, which has been a key motivation behind my thesis. He introduced me to companies that develop products for the medical field and have successfully conducted medical trials, as well as projects designed with autistic people in mind. Beyond technical guidance, he gave me valuable pointers on how to approach expert interviews and tell the story behind my research. He encouraged me to clearly define why this field of design is important to me and how my work connects to real-world problems. This discussion gave me a lot of insights into the bigger picture of universal design, showing me new opportunities for research and development in this space. More than that, it reinforced the importance of being passionate about what I’m designing.
Third Round: Bringing Ideas to Life
Today, I had a meeting with Mr. Kaltenbrunner from the University of Art and Design in Linz, who is also a co-founder of Reactable Systems, one of the inspirations for the design and research prototype I built last year. Our conversation revolved around tangible user interfaces and how they could be used for children with autism. He showed me several existing projects for autistic children, which immediately reignited my interest in creating an interactive school table. We talked about the best way to start working on this idea, and he suggested that my first focus should be on designing the UI for the interface, essentially starting with a digital app before thinking about how to integrate tangible interaction. One concept that stood out from our discussion was fictional design, a method that encourages focusing on the concept and complexity of interactions first, rather than getting stuck on technological limitations. Given the limited timeframe of my thesis, this approach makes a lot of sense. Instead of trying to perfect the hardware immediately, I should develop the experience and interactions first, then later explore how to make them tangible. This conversation was incredibly valuable because it helped me redefine my next steps: instead of jumping straight into prototyping the hardware, I will first develop the digital interface, refine the user experience, and then gradually explore physical interactions.
These three rounds of feedback helped me gain clarity on my direction. Moving forward, I now have a clear structure for my thesis work:
Finalize my research phase by conducting a few more expert interviews, now with a clearer understanding of what insights I need.
Develop a structured plan for my prototyping phase, breaking it down into manageable steps.
Start with digital prototyping, designing an interactive learning tool that can later be explored for tangible interaction.
Use the concept of fictional design to refine my ideas, focusing on how the experience should feel before worrying about the technical aspects.
After my long and confusing ideation process, I had my 15-minute talk with Birgit Bachler, which really got me thinking about my topic and helped me clear my mind on some points.
My notes from her feedback:
She highlighted how important it is to look at things from different angles and to use unexpected changes to keep the work interesting. Birgit also suggested starting with what I personally care about, as this makes my writing more genuine and relatable.
We talked about asking “what if” questions to explore new ideas, which can bring more depth to my research and make it more interesting. She also stressed the importance of knowing exactly what message I want to share; this clarity helps me write more effectively. Overall, my talk with Birgit has inspired me to approach the project in a more creative and focused way.
Choosing a Topic:
After my conversation with Birgit, I realized that I had been following a path of thought from the general to the personal. With her feedback, I turned inward and thought about my own interests and problems; considering both the personal and the general helped me, and I decided on my topic!! FINALLY!
Project Description:
I’ve been thinking about how I’m learning German, hearing different languages in class and around the city, noticing the varied and sometimes funny ways words are spoken. It’s interesting to see these things from an artistic angle.
So, my current project is to visualize the differences and similarities of audio from different languages. “What do the same words/letters/sentences ‘look’ like in different languages?” is my research question. I want to analyze the outcome of these visuals and experiment with possible ways to use them. Another experiment could be asking deaf people whether they notice a pattern in those visuals, for another point of view.
Hopefully, by the end of my prototyping, I can effectively demonstrate the unique acoustic properties of different languages by analyzing sounds through creative visualizations, and perhaps this will also provide an interesting way to explore and understand differences in linguistic expression.
Next Step:
My next step will be creating a visual in TouchDesigner. I will look for some tutorials and experiment to find which visual representation suits my project best.
To be able to user test my concept, I want to create a prototype with certain technical functionality in place. Based on ease of use and personal prior knowledge, I decided to use an Arduino to test the desired functions.
Required equipment
Arduino Uno
Battery pack
Breadboard
Buttons
LED lights
NeoPixel ring
Electret microphone
Piezo speaker
Wires
Resistors
To begin the prototyping, I watched this tutorial on YouTube: https://www.youtube.com/watch?v=bMs5J4bJOD0. It shows how to connect a microphone to LED lights for instant feedback in an Arduino setup. I thought this was a good starting point for building my prototype, because it contains some of the major functionalities of my concept.
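Based on the tutorial’s description, a minimal sketch for this kind of instant feedback could look something like the one below. The pin assignments (microphone on A0, NeoPixel ring on pin 6), the 12-pixel ring size and the amplitude ceiling are my own assumptions rather than values from the tutorial, so they would need to be adjusted to the actual wiring.

```cpp
// Minimal sketch: map microphone loudness onto the NeoPixel ring.
// Assumed wiring (adjust as needed): electret mic output on A0,
// NeoPixel ring data-in on pin 6, 12 pixels on the ring.
#include <Adafruit_NeoPixel.h>

const int MIC_PIN = A0;
const int RING_PIN = 6;
const int NUM_PIXELS = 12;

Adafruit_NeoPixel ring(NUM_PIXELS, RING_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  ring.begin();
  ring.show();  // start with all pixels off
}

void loop() {
  // Sample the mic for ~50 ms and take the peak-to-peak amplitude
  // as a simple proxy for loudness.
  unsigned long start = millis();
  int minVal = 1023, maxVal = 0;
  while (millis() - start < 50) {
    int sample = analogRead(MIC_PIN);
    if (sample < minVal) minVal = sample;
    if (sample > maxVal) maxVal = sample;
  }
  int amplitude = maxVal - minVal;

  // Light up more pixels the louder the signal is.
  // 300 is a rough ceiling; tune it to the actual mic module.
  int lit = constrain(map(amplitude, 0, 300, 0, NUM_PIXELS), 0, NUM_PIXELS);
  for (int i = 0; i < NUM_PIXELS; i++) {
    if (i < lit) {
      ring.setPixelColor(i, ring.Color(0, 80, 120));  // lit pixel
    } else {
      ring.setPixelColor(i, 0);                       // off
    }
  }
  ring.show();
}
```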
I used the web-based tool on tinkercad.com to build my Arduino setup and write the code. This is how it looks so far:
In the next step, I will research how to add a speaker. This tutorial shows a setup with both a microphone and speaker: https://www.youtube.com/watch?v=nIDhkvomrcg. It will hopefully help me on the way. If I can access the required equipment, I will also attempt to build it in a physical format.
Further, I need to figure out how to generate random melodies and play them on the Arduino speaker. This website shows examples of how the melodies could sound: https://random-music-generators.onrender.com/melody. Ideally, I want the tempo parameter to be around 70 and the number of notes to be 4.
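As a rough sketch of how this could work on the Arduino itself, the snippet below picks four random notes from a fixed scale and plays them on the piezo speaker at roughly 70 BPM using the built-in tone() function. The speaker pin and the pentatonic note set are my own assumptions, not taken from the melody generator website.

```cpp
// Sketch: play a random 4-note melody at ~70 BPM on a piezo speaker.
// Assumptions: piezo on pin 8; note set is a C major pentatonic scale.
const int SPEAKER_PIN = 8;
const int NOTE_COUNT = 4;
const int BPM = 70;

// Frequencies in Hz: C4, D4, E4, G4, A4
const int SCALE[] = {262, 294, 330, 392, 440};
const int SCALE_SIZE = sizeof(SCALE) / sizeof(SCALE[0]);

void setup() {
  randomSeed(analogRead(A1));  // unconnected analog pin as a noise source
}

void loop() {
  unsigned long beatMs = 60000UL / BPM;  // ~857 ms per beat at 70 BPM
  for (int i = 0; i < NOTE_COUNT; i++) {
    int freq = SCALE[random(SCALE_SIZE)];   // pick a random note
    tone(SPEAKER_PIN, freq, beatMs * 8 / 10);  // small gap between notes
    delay(beatMs);
  }
  noTone(SPEAKER_PIN);
  delay(2000);  // pause before the next random melody
}
```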
For visual purposes, I also want to build a non-technical prototype to demonstrate the intended design in the end. The two prototypes will complement each other, and will ideally be merged in the final prototype video.
Building upon my previous blog post on immersive level design, this post aims to explore immersion and interaction within VR, MR and AR a bit further by looking into the different interface and interactivity solutions currently available, some already mentioned, some new, that may increase intuitiveness and user engagement. While AR and MR solutions are still covered, the main focus of this post will be on VR, as it is more in line with my future plans and planned research.
Tracking, controllers, recognition and other interaction methods
With a wide variety of different VR, AR and MR headsets and technologies comes an equally wide variety of input devices, interfaces and ways to interact with the created virtual environment. In VR alone, there exists a seemingly endless number of different controllers, with each headset developer putting their own spin on the concept.
Different headsets, different controllers – HTC Vive Pro, Meta Quest 3, PS Move and Valve Index (left to right)
However, controllers like these are by no means the sole means of interacting with virtual environments anymore: with advancements in tracking, movement recognition and voice recognition, a vast variety of input and interface methods has been developed alongside conventional controller-based inputs.
Hands, eyes, voice and haptic feedback
As previously mentioned, constant advancements in available computing power, frequent optimisations as well as new technologies make it possible to create virtual experiences that are more immersive than ever.
One such advancement lies in tracking and in how the tracked movement data get processed. While hand and gesture tracking has long been a staple especially in AR headsets, due to their inbuilt sensors and tracking, it has also become one in VR and MR applications. Differentiating between hand tracking, controller-based tracking and gesture tracking (more commonly known as gesture recognition), which may all appear similar at a glance, is quite simple. Hand tracking, as the name suggests, tracks the actual movement of the hands within the virtual space.
Ultraleap’s 3Di, a small tracking camera for hand tracking, comes with its own integrated interface
Unlike controller-based tracking, it frees the hands for interaction without relying on buttons or other inputs. Controller tracking, in comparison, also captures the movement of the hands, but instead of doing so directly, it tracks the hand-held controllers. These controllers usually come with a wide variety of buttons, joysticks and other triggers that can be programmed and used to interact with the environment and input information. Last but not least, gesture recognition interprets specific hand movements or gestures made by the user and reacts in specific ways, allowing interaction and enabling control over certain parts of the virtual space. It can be understood as a specific form of hand tracking, since specific parts of the hand get tracked, though in this case the gesture made is usually more important than the position of the hand relative to the rest of the body.
Ultraleap Leap Motion Controller 2, a gesture tracking controller with a wide variety of applications
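To make the difference between raw hand tracking and gesture recognition a bit more concrete, here is a minimal, self-contained sketch of how a pinch gesture could be recognised on top of tracked fingertip positions. The positions and the 2 cm threshold are placeholder assumptions rather than values from any specific SDK.

```cpp
// Illustration: gesture recognition as a layer on top of hand-tracking data.
// A tracker delivers raw 3D joint positions per frame; the "gesture"
// (here, a pinch) is recognised by interpreting those positions.
#include <cmath>
#include <iostream>

struct Vec3 {
    float x, y, z;
};

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A pinch is detected when thumb tip and index tip come closer than ~2 cm.
// The threshold is an assumed placeholder, not a value from a real SDK.
bool isPinching(const Vec3& thumbTip, const Vec3& indexTip) {
    const float PINCH_THRESHOLD_M = 0.02f;
    return distance(thumbTip, indexTip) < PINCH_THRESHOLD_M;
}

int main() {
    // Placeholder tracking data (in metres), standing in for one frame
    // of real hand-tracking output.
    Vec3 thumb = {0.10f, 0.05f, 0.30f};
    Vec3 index = {0.11f, 0.05f, 0.31f};
    std::cout << (isPinching(thumb, index) ? "pinch" : "no pinch") << "\n";
}
```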
While it may seem that the main focus of current interaction lies in tracking the movement of extremities, mainly the hands, this is not the whole picture. Eye tracking, for example, is a gaze-based form of interaction that follows the user’s eye movements to enhance realism, allow interaction and render specific parts of the scene in more or less detail (so-called foveated rendering), thus deepening immersion as needed. Meanwhile, voice assistants like Amazon’s Alexa, Microsoft’s Cortana or Google’s own Voice Assistant have long been usable in VR and MR as well, to control and interact with the virtual environment using vocal commands. Combining these different tracking technologies can make the user’s environment feel much more responsive.
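In the same spirit, a gaze-based interaction like dwell selection boils down to checking, frame by frame, whether the tracked gaze direction stays within a small cone around an object for long enough. The sketch below illustrates this idea; the 5° cone, 0.8 s dwell time and 90 Hz frame rate are my assumptions, not figures from a particular headset.

```cpp
// Illustration: gaze-based selection via dwell time. An eye tracker delivers
// a gaze direction each frame; an object counts as "selected" once the gaze
// stays within a small cone around it for long enough.
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// True if the gaze direction points at the target within a ~5 degree cone.
bool gazeHitsTarget(const Vec3& gazeDir, const Vec3& toTarget) {
    float cosAngle = dot(gazeDir, toTarget) / (length(gazeDir) * length(toTarget));
    return cosAngle > std::cos(5.0f * 3.14159265f / 180.0f);
}

int main() {
    const float FRAME_TIME_S = 1.0f / 90.0f;  // assumed 90 Hz headset
    const float DWELL_TIME_S = 0.8f;          // assumed required dwell time
    float dwell = 0.0f;

    Vec3 toTarget = {0.0f, 0.0f, 1.0f};       // object straight ahead
    for (int frame = 0; frame < 200; frame++) {
        Vec3 gazeDir = {0.01f, 0.0f, 1.0f};   // placeholder tracker sample
        dwell = gazeHitsTarget(gazeDir, toTarget) ? dwell + FRAME_TIME_S : 0.0f;
        if (dwell >= DWELL_TIME_S) {
            std::cout << "object selected by gaze\n";
            break;
        }
    }
}
```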
But it is not only the user’s inputs that can be enhanced by new technologies to create a more immersive experience. Using haptic feedback systems, spatial computing and hardware solutions that create the illusion of seemingly endless virtual spaces even within a very limited physical environment, the immersion of the user and the responsiveness of the environment can be increased even further. While haptic feedback gloves have already been mentioned in a previous blog post, it is important to note that haptic feedback, in the broader sense, is in no way limited to the hands anymore.
teslasuit – a full-body haptic feedback suit that also tracks movement and can be used for motion capture
Haptic feedback suits, like the one shown above, can provide live feedback from the virtual environment to the user via vibrations, which can create the illusion of a physical component being present in the environment. Furthermore, spatial computing, especially in combination with multimedia rooms like the TU Graz VR Cave, can be used to merge physical and digital worlds more seamlessly, allowing physical objects to be tracked and to influence the virtual, while also allowing virtual objects to interact with the real environment. Additional hardware that allows for extended movement through the virtual space even within smaller real spaces, like an omnidirectional treadmill, can further blur the line between the virtual and the real.
Virtuix Omni One – an omnidirectional treadmill developed for gaming in VR
Things to consider
When presented with all these different options for tracking the user’s input and the data they provide, it can be very easy to feel overwhelmed or to lose track of why these movements should be tracked and/or used in the first place: namely, to provide a smooth, fun and immersive experience that allows a maximum of user interaction with a minimum of effort on the user’s side. To ensure that, there are a few important points to consider when designing such an interaction.
Abstraction, intuitiveness, optimisation and sense of security
A good first step when approaching the design of user interactions and interfaces for the virtual is to mimic the real world and its interactions, which increases both intuitiveness for the user and clarity when providing feedback. By adapting the sense of realism or choosing a certain level of abstraction for the inputs and/or interfaces, they can be simplified and made to fit the desired experience without distracting the user.
Frequent user testing, followed by refinement and optimisation of the employed systems, can increase responsiveness and accessibility and create a sense of security for the user when confronted with the virtual environment. Furthermore, the higher the continuity of the created content, both in design and in experience, and the more seamless the transition between the physical and the virtual, the easier it is for the user to engage, which also boosts self-confidence and security.
All in all, by making use of the different technologies described above, while staying aware of the challenges and opportunities they bring, and by optimising and adapting the desired experience to the user’s needs, it is already possible to create amazingly responsive environments today. It remains important, however, to be aware of the ever-present limitations of current hardware; but with how rapidly technology and development keep progressing, the next solution might already be around the corner.
Next steps:
Look further into different VR and MR solutions and their respective issues
Research essential tools for creating immersive virtual environments as well as different game engines and their advantages and disadvantages
Check methods of engagement and interaction within these digital environments