I got so frustrated with Unity’s increasing restrictions and issues that I’ve decided to temporarily abandon my current prototype and shift to learning Unreal Engine. My goal for this semester is to create an interactive prototype using Unreal, starting with a sci-fi scene to grasp the basics, then adding interactivity through a smartphone or game controller. This change aims to leverage Unreal’s capabilities for more complex and visually stunning interactive and XR experiences.
Exploring Meta’s Quest Integration for Unity: My Experience
I took my first steps by experimenting with Meta’s Quest Integration for Unity. Here’s a quick rundown of my journey and the challenges I faced.
Easy Setup for Quick Prototyping
The initial setup was straightforward and user-friendly. Meta provides a lot of built-in functionality, making it ideal for quick prototyping. Getting a basic project up and running took minimal effort, thanks to the clear setup guides.
Complications Arise with Advanced Features
However, as I moved beyond the basics, things became complicated. The documentation from Meta is often outdated or incomplete, making it difficult to implement more advanced features.
What I Built
I created a terrain with high-definition textures and added functionality for teleporting (locomotion) and interacting with objects. The process involved using Unity’s XR Interaction Toolkit, which, despite some confusing moments due to sparse documentation, helped me achieve the desired interactions.
Performance Issues
A significant issue I encountered was performance. Despite the simplicity of my scene, I noticed major frame rate drops, falling below 38 fps. This highlighted the need for optimization, even in basic projects, to maintain a smooth VR experience.
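For context: at the Quest’s 72 Hz refresh rate, each frame has a budget of roughly 1000 / 72 ≈ 13.9 ms, so a drop to 38 fps means frames taking over 26 ms. Below is a minimal, engine-agnostic C++ sketch of the kind of frame-time tracking that helps spot such drops; all names here are my own illustration, not part of the Meta SDK or Unity:

```cpp
#include <chrono>
#include <cstdio>
#include <deque>

// Rolling frame-time tracker: keeps a short window of recent frame times and
// flags any frame that exceeds the budget for the target refresh rate.
class FrameTimer {
public:
    explicit FrameTimer(double targetFps) : budgetMs_(1000.0 / targetFps) {}

    // Call once per rendered frame.
    void Tick() {
        const auto now = Clock::now();
        if (hasLast_) {
            const double ms =
                std::chrono::duration<double, std::milli>(now - last_).count();
            samples_.push_back(ms);
            if (samples_.size() > 90) samples_.pop_front(); // ~1 s window
            if (ms > budgetMs_)
                std::printf("frame over budget: %.2f ms (budget %.2f ms)\n",
                            ms, budgetMs_);
        }
        last_ = now;
        hasLast_ = true;
    }

    double AverageFps() const {
        if (samples_.empty()) return 0.0;
        double sum = 0.0;
        for (double ms : samples_) sum += ms;
        return 1000.0 / (sum / samples_.size());
    }

private:
    using Clock = std::chrono::steady_clock;
    double budgetMs_;
    Clock::time_point last_{};
    bool hasLast_ = false;
    std::deque<double> samples_;
};

int main() {
    FrameTimer timer(72.0); // Quest refresh rate: ~13.9 ms per frame
    for (int i = 0; i < 10; ++i) timer.Tick(); // in a real app: once per frame
    std::printf("average fps over window: %.1f\n", timer.AverageFps());
}
```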
Takeaways
While Meta’s Quest Integration offers a solid foundation for VR development, the lack of updated documentation can be a barrier. For those starting out, begin with simple projects and utilize community resources extensively. Despite the challenges, the potential for creating immersive VR experiences is exciting and worth the effort.
This blog post will be a bit different than my others. Instead of focusing on something I have specifically researched, I will just ramble on a bit and share my thoughts (also in preparation for the meeting with Birgt Bachler).
I know I should write more blog posts. Truth be told, it is not that I am not thinking about the master thesis/blog posts. Because I am – a lot. I could blame it on the tight schedule we have to face. But the reality is that it feels like I am avoiding it a little bit.
I sometimes worry that the „spark“ isn’t there for this topic/theme. That maybe I should change it? That I should find another, more „perfect“ topic. However, I also feel like these feelings may just come from feeling overwhelmed and confused. At this stage, I am not entirely sure how best to proceed.
My current topic or theme is VR therapy methods (for both physical and mental health problems). I originally chose this topic for a variety of reasons. For one, I have a personal connection to it – my brother used it and really enjoyed it after he had a car crash. In general, I feel like this healthcare topic is of great importance and value to society. It would be great if my work could help somebody. I also do find it very intriguing, and it interests me how this technology can be used.
I just find it hard to begin with this topic. I could build a Unity VR prototype, but about what exactly? Also, will it have any scientific basis? I worry that I might not be able to get into contact with a hospital. That I might not find anybody to test with. That it might be too much, too difficult.
I guess it’s normal to feel a little bit lost sometimes (it’s all part of the process after all) but I hope I can find a clearer structure soon, so I can really start giving it my all.
Welcome to my journey into the fascinating world of Virtual Reality (VR) and Mixed Reality (MR) game development! This semester, I’m diving deep into understanding how to create visually stunning and highly performant VR/MR games that can run smoothly on standalone headsets like the Oculus Quest 3. In this first blog entry, I’ll introduce you to my research topic, discuss the unique challenges developers face, and explore why VR games vary so much in visual fidelity and user experience.
Why Focus on Standalone VR/MR Headsets?
Standalone VR/MR headsets are revolutionizing the way we experience virtual environments. Unlike tethered headsets, which rely on powerful PCs or consoles, standalone devices like the Oculus Quest 2 offer untethered freedom and accessibility. This opens up VR to a broader audience, making it crucial for developers to optimize their games for these platforms. However, creating games that are both beautiful and performant on standalone headsets presents unique challenges.
The Challenge of Balancing Performance and Visual Fidelity:
VR/MR games differ significantly in visual fidelity and user experience due to a variety of factors. Here are some of the key reasons:
Hardware Limitations: Standalone headsets have limited processing power and memory compared to high-end PCs. Developers must optimize their games to run smoothly within these constraints, which often means sacrificing some visual detail to maintain performance.
Rendering Techniques: Advanced rendering techniques used in PC-based VR games, such as real-time ray tracing, are often too resource-intensive for standalone devices. Developers must rely on more efficient methods like Level of Detail (LOD) and texture compression to achieve good performance (see the sketch after this list).
Optimization Strategies: Effective optimization involves a mix of techniques, including occlusion culling (not rendering objects that are hidden from the current view), efficient shaders, and baked lighting (pre-computed lighting effects). These strategies help maintain high frame rates, which are crucial for a smooth VR experience.
Art Style Choices: The choice of art style can significantly impact both performance and visual appeal. Stylized graphics with simplified textures and models can look great while being less demanding on the hardware, whereas hyper-realistic graphics require more resources and careful optimization.
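To make the Level of Detail idea from this list concrete: below is a small C++ sketch of distance-based LOD selection. The thresholds, names and vector maths are illustrative only and not taken from any particular engine.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Pick a level of detail from camera distance: index 0 is the full-detail
// mesh, higher indices are progressively cheaper versions.
int SelectLod(const Vec3& camera, const Vec3& object,
              const std::vector<float>& thresholds) {
    const float d = Distance(camera, object);
    for (size_t i = 0; i < thresholds.size(); ++i)
        if (d < thresholds[i]) return static_cast<int>(i);
    return static_cast<int>(thresholds.size()); // beyond the last threshold
}

int main() {
    const Vec3 camera{0.0f, 1.7f, 0.0f}; // roughly eye height
    const Vec3 rock{0.0f, 0.0f, 12.0f};
    // Full mesh under 5 m, medium under 15 m, cheapest beyond that.
    std::printf("LOD index: %d\n", SelectLod(camera, rock, {5.0f, 15.0f}));
}
```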
Why Do VR Games Differ So Much in Visual Fidelity and User Experience?
Red Matter 2, a rather optimised game with high-fidelity graphics
Pavlov Shack – the Quest 3 standalone port looks significantly worse than its PC counterpart
The variance in visual fidelity and user experience across VR games can be attributed to several factors:
Developer Priorities: Some developers prioritize visual fidelity, creating stunning environments but potentially compromising on performance. Others focus on smooth performance, resulting in simpler visuals.
Budget and Resources: Indie developers might not have access to the same level of resources and optimization tools as large studios, leading to differences in game quality.
Target Audience: Games aimed at a broader audience might prioritize accessibility and comfort, opting for less intensive graphics to ensure smooth gameplay for everyone.
Technological Evolution: As standalone VR technology evolves, newer games can leverage more advanced features and optimizations, leading to disparities between older and newer titles.
Conclusion:
As I embark on this research project, my goal is to uncover and document the best practices for balancing performance and aesthetics in standalone VR/MR games. Through a combination of literature review, case studies, and hands-on development, I aim to create a prototype game that demonstrates these principles in action. Stay tuned as I explore the technical and creative aspects of this exciting field, and share insights and discoveries along the way.
Thank you for joining me on this journey. In the next entry, I’ll delve into the initial research phase, exploring existing optimization techniques and visual enhancement strategies used in the industry. Let’s make VR/MR games not only playable but truly breathtaking!
Building upon my previous blog post on immersive level design, this post aims to explore the topic of immersion and interaction within VR, MR and AR a bit further by looking into different interface and interactivity solutions currently available – some mentioned previously, some new – that may increase intuitiveness and engagement on the user’s part. While still covering AR and MR solutions, the main focus of this blog post will be on VR, as it is more in line with my future plans and planned research.
Tracking, controllers, recognition and other interaction methods
With a wide variety of different VR, AR and MR headsets and technologies comes an equally wide variety of different input devices, interfaces and ways to interact with the created virtual environment. In VR alone, there exists a seemingly endless number of different controllers, with each headset developer putting their own spin on them.
Different headsets, different controllers – HTC Vive Pro, Meta Quest 3 Pro, PS Move and Valve Index Pro (left to right)
However, controllers like these are by no means the sole means of interacting with virtual environments anymore. With advancements in tracking, movement recognition and voice recognition, a vast variety of different input and interface methods has been developed alongside conventional controller-based inputs.
Hands, eyes, voice and haptic feedback
As previously mentioned, constant advancements in available computing power, frequent optimisations as well as new technologies make it possible to create virtual experiences that are more immersive than ever.
One such advancement lies in tracking and in how the tracked movement and data are processed. While hand and gesture tracking has long been a staple especially in AR headsets, thanks to their built-in sensors and tracking, it has also become one in VR and MR applications. Differentiating between hand tracking, controller-based tracking and gesture tracking – more commonly known as gesture recognition – which may all appear similar at a glance, is actually quite simple. Hand tracking, as the name suggests, tracks the actual movement of the hands within the virtual space.
Ultraleap’s 3Di, a small tracking camera for hand-tracking, comes with its own integrated interface
Unlike controller-based tracking, it frees the hands for interactions without relying on buttons or other inputs. Controller tracking, in comparison, also tracks the hands’ movement, but instead of doing so directly, it tracks the hand-held controllers. These controllers usually come with a wide variety of buttons, joysticks and other triggers that can be programmed and used to interact with the environment and to input information. Last but not least, gesture recognition interprets specific hand movements or gestures made by the user and reacts in specific ways, allowing interaction and enabling control over certain parts of the virtual space. It can be understood as a specific form of hand tracking, as specific parts of the hand get tracked, though in this case the gesture made is usually more important than the position of the hand relative to the rest of the body.
Ultraleap Leap Motion Controller 2, a gesture tracking controller with a wide variety of applications
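To make the distinction tangible, here is a minimal C++ sketch of gesture recognition in the form of pinch detection: only the relative position of two tracked joints matters, not where the hand sits in the room. The joint names and the 2 cm threshold are my own assumptions, not taken from Ultraleap’s or any other vendor’s SDK.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Simplified hand pose, as a hand-tracking camera might report it
// (illustrative only).
struct HandPose {
    Vec3 thumbTip;
    Vec3 indexTip;
};

float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Gesture recognition in miniature: the pinch depends only on the relative
// distance of two joints, not on the hand's position in the room.
bool IsPinching(const HandPose& hand, float thresholdMetres = 0.02f) {
    return Distance(hand.thumbTip, hand.indexTip) < thresholdMetres;
}

int main() {
    const HandPose hand{{0.10f, 1.20f, 0.30f}, {0.11f, 1.20f, 0.30f}};
    std::printf("pinching: %s\n", IsPinching(hand) ? "yes" : "no");
}
```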
While it may now seem that the main focus of current interaction lies in tracking the movement of extremities, mainly the hands, this is not true. Eye tracking, for example, is a gaze-based form of interaction that follows the user’s eye movements to enhance realism, allow interaction and render specific parts of the scene in more or less detail (foveated rendering), thus deepening immersion as needed. Meanwhile, voice assistants such as Amazon’s Alexa, Microsoft’s Cortana or Google’s own Voice Assistant have long been usable in VR and MR as well, to control and interact with the virtual environment using vocal commands. Using a combination of these different tracking technologies can make the user’s environment feel much more responsive.
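One common gaze-based interaction pattern is dwell selection, where an object counts as “clicked” once the user has looked at it for long enough. Below is a small illustrative C++ sketch of that idea; the cone angle and dwell time are made-up tuning values, not from any eye-tracking SDK.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Normalise(Vec3 v) {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Tracks how long the gaze ray stays within a small cone around a target and
// fires once the dwell time is reached.
class DwellSelector {
public:
    DwellSelector(float coneDegrees, float dwellSeconds)
        : cosCone_(std::cos(coneDegrees * 3.14159265f / 180.0f)),
          dwellSeconds_(dwellSeconds) {}

    // gazeDir and toTarget must be normalised; dt is frame time in seconds.
    bool Update(Vec3 gazeDir, Vec3 toTarget, float dt) {
        const float dot = gazeDir.x * toTarget.x + gazeDir.y * toTarget.y +
                          gazeDir.z * toTarget.z;
        elapsed_ = (dot > cosCone_) ? elapsed_ + dt : 0.0f; // reset off-target
        return elapsed_ >= dwellSeconds_;
    }

private:
    float cosCone_;
    float dwellSeconds_;
    float elapsed_ = 0.0f;
};

int main() {
    DwellSelector selector(5.0f, 1.0f); // 5 degree cone, 1 second dwell
    const Vec3 gaze = Normalise({0.01f, 0.0f, 1.0f});
    const Vec3 target = Normalise({0.0f, 0.0f, 1.0f});
    for (int frame = 0; frame < 80; ++frame) {
        if (selector.Update(gaze, target, 1.0f / 72.0f)) {
            std::printf("selected at frame %d\n", frame);
            break;
        }
    }
}
```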
But it is not only the user’s inputs that can be enhanced by new technologies to create a more immersive experience. Using haptic feedback systems, spatial computing and hardware solutions that create the illusion of seemingly endless virtual spaces even within a very limited physical environment, the user’s immersion and the responsiveness of the environment can be increased even further. While haptic feedback gloves have already been mentioned in a previous blog post, it is important to note that haptic feedback, in the broader sense, is no longer limited to the tactile layer.
teslasuit – a full-body haptic feedback suit that also tracks movement and can be used for motion capture
Haptic feedback suits, like the one shown above, can provide live responses from the virtual environment to the user via vibrations, creating the illusion of a physical component being present in the environment. Furthermore, spatial computing, especially in combination with multimedia rooms like TU Graz’ VR Cave, can be used to merge physical and digital worlds more seamlessly, allowing physical objects to be tracked and to influence the virtual world, while also allowing virtual objects to interact with the real environment. Additional hardware that allows for extended movement of the user through the virtual space even when limited by smaller real spaces, such as an omnidirectional treadmill, can further blur the line between the virtual and the real.
Virtuix Omni One – an omnidirectional treadmill developed for gaming in VR
Things to consider
When presented with all these different options to track the user’s input and the data they provide, it can be very easy to become overwhelmed or to lose track of why these movements should be tracked and / or used in the first place: namely, to provide a smooth, fun and immersive experience that allows a maximum amount of user interaction with a minimum amount of effort on the user’s side. To ensure that, there are a few important steps to consider when designing such an interaction.
Abstraction, intuitiveness, optimisation and sense of security
A good first step when approaching the design of user interaction and interfaces for the virtual is to mimic the real world and its interactions, to increase both intuitiveness for the user and clarity when providing feedback. By adapting the sense of realism or choosing a certain level of abstraction for the inputs and / or interfaces, they can be simplified and made to fit the desired experience without distracting the user.
Frequent user testing, followed by refinement and optimisation of the employed systems, can increase responsiveness and accessibility and create a sense of security in the user when confronted with the virtual environment. Furthermore, the higher the continuity of the created content, both in design and in experience, and the more seamless the transition between the physical and the virtual, the easier it is for the user to engage, which also boosts self-confidence and security.
All in all, by making use of the different technologies described above, while being aware of the challenges and opportunities they bring, and by optimising and adapting the desired experience to the user’s needs, it is nowadays already possible to create amazingly responsive environments. It is, however, still important to be aware of the ever-present limitations of current hardware – but with how rapidly technology and development keep progressing, the next solution might already be around the corner.
Next steps:
Look further into different VR and MR solutions and their respective issues
Research essential tools for creating immersive virtual environments as well as different game engines and their advantages and disadvantages
Check methods of engagement and interaction within these digital environments
To better understand how to design immersive and engaging experiences in VR, AR and MR, and building on the previous post about issues and how to solve them, it is necessary to understand the influence of world building and level design on the overall experience. As such, this blog post aims to provide a short overview of the different considerations to be aware of when dealing with the challenge of creating an immersive experience for VR, AR and MR. As with my previous post, however, I would like to dive even deeper into immersive world building and continue my research, so this entry is only a momentary summary; my research into this topic – especially since I plan to eventually read all of Richard A. Bartle’s „Designing Virtual Worlds“ – is still incomplete.
How to create engaging, immersive worlds using level design
Understanding the importance of space and spatial awareness
When designing VR and AR environments, it is crucial to remember that, unlike other virtual experiences in front of a monitor or screen, such as games or videos, in VR and AR the user moves their own body and their virtual avatar through the real and the virtual world. This may seem like an obvious and perhaps unimportant difference at first, but it has real implications, as the user’s physical space and their interactions within it also need to be considered when designing. Furthermore, the user’s real-life comfort zone, field of view and the virtual viewable distance need to be considered as well to understand how to engage with the space.
Using realism, stylisation and interactivity to boost immersion
By utilising the capabilities of modern graphics cards as well as making use of optimisation and / or stylisation options when designing virtual environments for VR and MR, it becomes possible to create both realistic and semi-realistic experiences that still keep the user immersed. No matter the style, however, maximal attention should be given to light, sound and any haptic or other tactile feedback, as only a holistic experience allows the user to perceive the created environment as real and thus allows for maximum immersion. In this regard, it is also important to be aware of how the user interacts with and influences the created world. Integrating easy, intuitive and natural-feeling interactions that take advantage of the capabilities of VR and AR input devices, such as controller movement and position tracking, gesture tracking or even full inside-out tracking, can shape how the user engages with the virtual world.
Design intuitive spaces and provide clear cues
Understanding the perception of different spaces and how it may influence the user – be it in an AR environment, in MR or entirely virtually in VR – is important, as all these environments, unlike our usual computer screen, are perceived as three-dimensional and thus need to be approached accordingly. By providing clear cues for navigating the virtual space, it becomes easy and safe to explore. To counter the limitations of a user’s real, physical environment versus its virtual counterpart, a wide variety of movement methods can be explored, such as teleportation, smooth locomotion or even simple walking, should the space allow it. This allows for a different approach to space than usual, as it further opens the environment up for exploration.
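Teleportation, for example, is often implemented by casting a parabolic arc from the controller and checking where it lands. Here is a rough, engine-agnostic C++ sketch of that idea; real implementations raycast against actual level geometry, and the flat floor at y = 0 here is a simplifying assumption of mine.

```cpp
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

// Steps along a ballistic arc from the controller and returns the point where
// the arc crosses the floor plane (y = 0). A real implementation would raycast
// against level geometry and validate the surface before allowing the teleport.
std::optional<Vec3> TeleportTarget(Vec3 position, Vec3 velocity,
                                   float gravity = 9.81f, float dt = 0.02f,
                                   int maxSteps = 200) {
    for (int i = 0; i < maxSteps; ++i) {
        velocity.y -= gravity * dt; // integrate gravity
        const Vec3 next{position.x + velocity.x * dt,
                        position.y + velocity.y * dt,
                        position.z + velocity.z * dt};
        if (next.y <= 0.0f) return next; // arc reached the floor
        position = next;
    }
    return std::nullopt; // arc never landed: no valid teleport target
}

int main() {
    // Controller held at 1.5 m, pointed forward and slightly upward.
    if (const auto hit = TeleportTarget({0.0f, 1.5f, 0.0f}, {0.0f, 2.0f, 5.0f}))
        std::printf("teleport to (%.2f, %.2f, %.2f)\n", hit->x, hit->y, hit->z);
    else
        std::printf("no valid target\n");
}
```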
It is, however, important to always be aware of and understand the limitations of such an experience, both in hardware – e.g., tracking and sensing capabilities and physical boundaries – and in the user’s own perception – e.g., how much they can handle without being overwhelmed by stimuli and cues. A good example of such an overwhelming experience, one that is sure to cause fatigue in the long term, is the short film „Hyper-Reality“ by Keiichi Matsuda.
Hyper-Reality by Keiichi Matsuda
Be aware of storytelling and how it is affected by 3D
Last but not least, once the exploration of both the virtual and the physical space – depending on whether the created experience is in AR, VR or MR – has been opened up to the user, it is paramount to understand how storytelling is affected by the 3D environment. A good approach is to learn from the gaming industry, which has been creating explorable semi-3D and 3D environments for a long time now. Of course, with AR, VR and MR, these spaces – these levels – get extended even further, with more feedback being added, more information being perceived and more interactions being possible, so careful consideration should be given to which tools are applicable and which are unsuited.
The end goal, in any case, should be to find innovative ways to tell a story that seamlessly incorporates its narrative elements within the created user experience. Making sure that the experience remains as complete as possible while still offering free exploration remains the challenge when creating such a virtual space.
Next steps:
Look into interfaces unique to VR, AR and MR
Check how interaction with VR, AR and MR may work
Research how to boost engagement and create intuitive interactions
Continuing my previous blog post’s aim of understanding VR, AR, MR and XR along with their strengths and weaknesses, this post provides insight into the different problems one may be confronted with when designing for, or regularly engaging with, these technologies, and how best to solve them. It is necessary to mention up front that, while this blog post provides a broad overview of the different challenges one may face along with some possible solutions, it is in no way a complete guide yet, as my research into this topic is still ongoing and thus incomplete.
Discomfort, accessibility and other challenges
To provide a widely accessible, engaging, comfortable and immersive experience, it is important to address the multitude of issues a user may encounter when engaging with VR, AR or MR. By being aware of these issues, of their implications for the experience and of how to solve them, a much more complex and immersive experience can be created.
Motion sickness and other discomforts
Extended exposure to virtual environments, be it via VR, AR or MR, can lead to a wide variety of discomforts when not handled and prepared for accordingly.
Simulator sickness, a special type of motion sickness, occurs when our visual and vestibular systems receive conflicting information. This can happen, for example, when the visual display or an overlaid object suggests rapid motion, but the user’s body senses no corresponding movement. The resulting mismatch in information can lead to disorientation, nausea and discomfort.
Furthermore, extended exposure to close-up virtual objects or displays can lead to both eye strain and general fatigue, as the eyes have to constantly strain to focus. This strain may increase even further if the eyes constantly have to follow rapidly moving objects or to re-focus because overlays, interfaces or similar displayed objects move back and forth or rapidly in and out of focus.
Last but not least, when switching between AR and VR – which may mainly occur when working with MR solutions – but also when switching between AR or VR and the real world, users may experience a short bout of spatial disorientation. This may happen when the transition between the virtual and the real environment is either too abrupt or when not enough spatial cues are present in either environment for the user to orient themselves. This can then cause disorientation, dizziness or brief emotional distress.
Accessibility and impairments
Another thing to consider when dealing with artificial environments is accessibility for people with certain sensitivities or impairments. To create an inclusive and immersive experience, a number of challenges have to be considered.
As VR and AR experiences currently rely heavily on visual elements and spatial cues, it is very easy to exclude users with visual impairments. Whether it be complete blindness, colour blindness or other similar disabilities, a number of things need to be considered to make the experience as inclusive as possible. In a similar vein, people with auditory, cognitive or motor impairments may have trouble interacting with the created environment as well. Be it challenging motion-based inputs, complex interfaces, information overload or a lack of adaptability, users may find an interaction needlessly challenging and thus feel excluded from the experience unless ample consideration is given. This, of course, is also true for users with particular sensory sensitivities, such as to bright lights, loud sounds or intense vibrations.
As such, accessibility needs to always be considered when designing in and for such environments.
Possible solutions and their integration
These considerations may seem challenging and / or limiting at first; however, they can also be understood as a chance to create a more immersive, comfortable and holistic experience for the user. When designing virtual environments, it is important to constantly ask oneself whether ample consideration has been given to the described challenges, and to keep in mind some of the solutions described below.
Comfort and customisation
To increase the user’s long-term immersion, and their comfort while doing so, a number of small changes and customisation choices can already make a huge difference.
One such change lies in how the software is structured in the first place. To give an example, by including different options to adjust the volume or brightness of the headset, and by providing different levels of haptic feedback, the whole experience can be customised to fit people of different sensitivities. In the same vein, allowing for the adjustment of the displayed colours or sounds, or adding audio-descriptive elements, can provide more inclusivity for people with auditory or visual impairments. Following this approach, the chance of motion sickness occurring can also be decreased by letting the user choose settings such as movement speed and field of view, and by adding comfort vignettes, so that each user can find the settings that minimise their overall discomfort when exposed to VR or MR environments for an extended amount of time.
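As a concrete example of such a comfort setting: a vignette typically narrows the field of view in proportion to how fast the user moves or turns, scaled by a user-chosen intensity. Below is a minimal C++ sketch of one possible mapping; the constants are my own illustrative choices, not standard values.

```cpp
#include <algorithm>
#include <cstdio>

// Maps current movement to a vignette strength in [0, 1], scaled by a
// user-chosen comfort intensity so sensitive users get a stronger effect.
float VignetteStrength(float speedMetresPerSec, float turnDegPerSec,
                       float userIntensity /* 0 = off, 1 = max */) {
    const float fromSpeed = std::min(speedMetresPerSec / 4.0f, 1.0f); // saturates at 4 m/s
    const float fromTurn = std::min(turnDegPerSec / 180.0f, 1.0f);    // saturates at 180 deg/s
    return std::clamp(std::max(fromSpeed, fromTurn) * userIntensity, 0.0f, 1.0f);
}

int main() {
    // Smooth locomotion at 3 m/s, no turning, comfort intensity 0.8.
    std::printf("vignette strength: %.2f\n", VignetteStrength(3.0f, 0.0f, 0.8f));
}
```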
HaptX haptic feedback gloves
Sony’s accessibility controller, compatible with the new PS5 and PS VR
Another way to increase comfort is by making adjustments to the hardware and to how the user engages with it. Using headsets with reduced latency or the currently trending inside-out tracking, for example, can reduce motion sickness by boosting the sense of presence within the current environment, while also reducing the weight, and thus the discomfort, the user is exposed to long-term while interacting with the digital environment, be it AR, VR or MR. Similarly, by combining modern headsets with custom accessibility controllers or even haptic feedback gloves, a variety of different modes of interaction can be provided that could potentially lead to a more inclusive environment. A few solutions here would be tracking gestures of different body parts, allowing for gesture or voice controls as alternative input methods for people unable to use more conventional controllers, or adjusting the haptic feedback to provide more or less information depending on the user’s other sensory capabilities.
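At the code level, one way to support such alternative inputs is to decouple the user’s intent (e.g. “select”) from the device that produced it, so that a controller button, a pinch gesture or a voice command can all trigger the same action. Here is a hedged C++ sketch of that pattern; all names are illustrative, not from any real input library.

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Abstract user intents, independent of the device that produced them.
enum class Intent { Select, Back, Teleport };

// Routes fired intents to all actions bound to them; any input source
// (controller, gesture recogniser, voice) can fire the same intent.
class IntentRouter {
public:
    void Bind(Intent intent, std::function<void()> action) {
        actions_[intent].push_back(std::move(action));
    }
    void Fire(Intent intent) {
        for (auto& action : actions_[intent]) action();
    }
private:
    std::map<Intent, std::vector<std::function<void()>>> actions_;
};

int main() {
    IntentRouter router;
    router.Bind(Intent::Select, [] { std::printf("object selected\n"); });

    // The same action, reachable via three different input paths:
    router.Fire(Intent::Select); // controller trigger pressed
    router.Fire(Intent::Select); // pinch gesture recognised
    router.Fire(Intent::Select); // voice command "select" heard
}
```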
Control and adaptability
To conclude the described solutions: providing the user with a maximum amount of control, thus allowing them to adapt the experience to their specific needs, can greatly boost both inclusivity and immersion. This may seem like an obvious conclusion at a glance, but the fact that new technologies and methods are still being developed to create an even better immersive experience for the wide variety of user needs hints at the opposite. As such, it is paramount to always consider your targeted end user while developing, and to frequently ask yourself not only how a created environment is best experienced, but also which challenges some users may face when doing so, and to react accordingly.
Next steps:
Look into immersive environments and how to create them
Check methods of engagement and interaction within these digital environments
To explore one of my possible research topics further, this blog post will look into the strengths and weaknesses of virtual, augmented, mixed and extended reality systems, as well as their exact definitions and current technological trends, to gain a better understanding of which system / technology to use in future endeavours when aiming for different experiences.
VR vs AR vs XR/MR – a comparison
To better understand the differences and similarities between the different technologies, it is first important to understand their definitions as well as their strengths and weaknesses. For this reason, a short overview will be provided.
Virtual Reality (VR)
Virtual reality creates a computer-generated environment that immerses the user in a completely digital, three-dimensional space, often experienced through specialized VR headsets, providing a sense of presence and interaction within the virtual world. The user is thereby completely separated from the real world, and all stimuli they experience are computer-generated. As a result, VR usually allows for much deeper immersion than the other solutions researched in this blog post, but it has its own strengths and weaknesses to be aware of.
Strengths:
Offers a completely immersive experience, perfect for training, gaming and simulations
Can easily create spaces that are not normally accessible and / or provide space where normally there would be none
Can be used in healthcare, especially in therapeutic applications, to provide immersive therapy, exposure therapy, pain management and rehabilitation
Weaknesses:
Isolation from the real world may cause emotional distress (solitude)
Needs special equipment that may be costly and / or not readily available
Augmented Reality (AR)
Augmented reality overlays digitally created content onto our real world, enhancing the user’s perception of their surroundings by integrating computer-generated information such as images, text, or 3D models into the real-time environment. These are typically viewed through devices like AR glasses, tablets or smartphone screens, though in recent years more applications have surfaced. In the automotive industry especially, heads-up displays make use of AR by projecting necessary information directly onto the windshield for the driver.
Strengths:
Real-world information overlay that relays information in real time and provides additional input
Allows for hands-free interaction, letting the user engage with the digital content while staying aware of the real world
Useful for product visualisation and trying out products before making a buying decision
Weaknesses:
Limited field of view, especially on smartphone screens or tablets
Mobile dependency means less computing power, which limits display performance and creates a need for optimisation
Mixed Reality (MR)
Mixed reality combines elements of both virtual and augmented reality, allowing digital and physical objects to coexist and interact in real time. It seamlessly blends the virtual and real worlds and allows switching between them, enabling users to engage with both types of content simultaneously. While this can, of course, be difficult to come to grips with at first, it also allows for a much deeper influence on the user’s perceived reality.
Strengths:
A high level of versatility: as it combines both VR and AR, it allows for a broader range of experiences to be created
Enables both in-room and virtual connection, communication and collaboration
Can, like VR, be used in a wide variety of industries for training purposes, while its AR side also allows for direct testing in the real world
Weaknesses:
Different technologies and their implementation can cause performance and optimisation issues, posing technical difficulties
Cost of adoption is currently still very high, especially when compared to pure VR or AR solutions
Extended Reality (XR)
Finally, extended reality is an umbrella term encompassing VR, AR and MR. It refers to the spectrum of immersive technologies that extend, enhance or blend reality. XR is a comprehensive term covering the entire range of experiences, from completely virtual to fully augmented, and aims to offer a holistic approach to immersive technologies. As such, it comes with all of the previously mentioned strengths and can, if used correctly, mitigate some of their weaknesses. The opposite is, however, also true when used incorrectly.
Current trends and technologies
While the market around VR, AR, MR and similar technological solutions is, of course, constantly evolving, it is still important to understand the general direction of these developments in order to better understand and work with them. As such, some of the most important trends are listed here.
Stand-alone, wireless VR, AR and MR headsets without the need for external trackers:
HTC recently presented their new inside-out tracker, which would allow for built-in tracking on a multitude of existing third-party headsets
While some of the previous generation’s systems still require external trackers / tracking stations to be set up, or require the headset to be connected to a PC via cable in order to ensure a smooth and immersive experience, current trends have since begun to diverge from that. The trend now seems to be towards stand-alone, wireless VR, AR and MR solutions that need no separate computer or similar unit for calculations and that offer tracking via inbuilt sensors instead of external stations. The tracker shown in the picture above, for example, allows for completely controllerless tracking of a person’s arms, legs, torso and even head, simply by attaching it to the limb in question, and up to 8 trackers can currently be used together to provide a smooth and easy experience.
Hybrid systems:
Both the Apple Vision Pro and the Meta Quest 3 offer hybrid solutions when it comes to VR and AR.
While there are certainly still solutions that focus on either VR or AR in particular, MR seems to be the much more common trend, with passthrough headsets becoming more and more widespread. This is especially obvious when looking at Apple’s recently released Vision Pro as well as Meta’s Quest 3.
Wider application in everyday life:
Looking at the adoption rate of VR, AR and MR solutions, it quickly becomes apparent that the possible applications have skyrocketed. While originally more of a niche development, a wide variety of experiences is offered nowadays, ranging from immersive nature documentaries and sports events to games, movies and more. Furthermore, with the increasing variety of headsets to choose from and their increasing computing power come lower prices – except for the Apple solution, of course – along with increased quality of experience, making the technology much more accessible in everyday life.
Next steps:
Look further into different XR solutions and their respective issues
Research essential tools for creating immersive virtual environments
Check methods of engagement and interaction within these digital environments
As we close the chapter on a semester filled with extensive learning about Extended Reality (XR), it’s an opportune moment to reflect on the ground we’ve covered and to anticipate the exciting journey ahead. Our exploration has spanned a diverse range of topics, each offering a unique perspective on the rapidly evolving world of XR.
XR 1// Evolving Interaction and Experiences in XR: I began this journey by delving into the dynamic world of XR interactions, examining how they’ve transformed over time and what the future holds.
XR 2 // The Evolution and Current Landscape of Extended Reality: This post offered a comprehensive view of XR’s growth trajectory and its current state, setting the stage for the subsequent deep dives.
XR 3 // VR Interactions: Controller vs Body Tracking: A detailed comparison between controller-based and body-tracking interfaces in VR, highlighting their unique advantages and potential applications.
XR 4 // Enhancing Virtual Reality: The Power of Haptic Feedback: I explored the sensory dimension of VR, focusing on how haptic feedback intensifies the immersive experience.
XR 5 // Interesting Case Study: HoloStudio UI and Interaction Design by Microsoft: This case study provided insights into practical applications of XR, emphasizing user interface and interaction design.
XR 6 // UX in Mixed Reality: I discussed the intricacies of user experience design in the mixed reality spectrum, emphasizing its importance in creating engaging and intuitive applications.
XR 7 // Dive into Vision OS Guidelines: This post was dedicated to understanding the best practices and guidelines in designing for XR platforms, particularly the Vision OS.
XR 8 // Beyond Gaming: XR in the Entertainment Industry: I expanded our view to see how XR is revolutionizing the broader entertainment sector, beyond just gaming.
XR 9 // XR in the Military Complex: My exploration concluded with an examination of XR’s applications in military training and strategy, showcasing its diverse utility.
What now?
Last semester’s curriculum primarily revolved around theoretical aspects. For the upcoming semester, I aim to adopt a more practical approach towards the subject. This will involve actively engaging with various XR applications and models for comparative analysis and learning. Additionally, I plan to implement the concepts learned in my own projects and create case studies for them.
In this semester’s research project, I explored various facets of Extended Reality (XR). Today, let’s delve into a lesser-discussed but significant aspect: the role of XR in the military defense complex. Notably, the military sector is one of the leading investors and developers in this technology.
Training with Mixed Reality
Mixed Reality (MR), blending elements of Augmented Reality (AR) and Virtual Reality (VR), has revolutionized military training. Historically, MR applications like the Swiss tank-driving simulator from the 1970s laid the groundwork for today’s sophisticated systems.
One prominent example is the U.S. Army’s Synthetic Training Environment (STE). This advanced system merges virtual and augmented reality to simulate a wide range of scenarios, from urban warfare to counterinsurgency operations, thus providing immersive training experiences. The STE is a comprehensive platform integrating virtual, live, and collective training elements, designed to be portable and cost-effective. It includes the Reconfigurable Virtual Collective Trainer (RVCT), which offers training for various military vehicles and infantry units.
Mixed Reality in Combat
MR’s role extends beyond training to actual combat operations. It significantly enhances situational awareness by providing soldiers with real-time information through contextually relevant visualizations. This includes displaying crucial data like maps, navigation, and enemy locations seamlessly.
Soldiers training with Microsoft’s IVAS
A key development in this area is the Integrated Visual Augmentation System (IVAS), a collaborative effort between Microsoft and the U.S. Army. Based on Microsoft’s HoloLens technology, IVAS delivers advanced capabilities such as rapid target acquisition, enhanced situational awareness, and improved navigational tools. It integrates various technologies like thermal imagery, sensors, GPS, and night vision to give soldiers a comprehensive view of the battlefield. This technology is not only pivotal for training but also holds immense potential for real-world combat operations, allowing soldiers to plan and execute missions with enhanced precision and information.
Support Functions
MR’s applications in the military also extend to support functions. It can transform maintenance and repair processes by overlaying relevant instructions onto real-world objects, aiding technicians and mechanics in performing tasks more efficiently.
In medical support and telemedicine, MR can overlay digital content such as instructions and patient data, facilitating accurate and efficient medical procedures in challenging environments.
Conclusion
MR technology is a game-changer in military applications, enhancing various aspects of operations. While it offers immense benefits in training, situational awareness, and support functions, there are challenges to consider. For instance, overreliance on technology can lead to operational inefficiencies if not managed properly. The concept of „HUD-Cripple,“ prevalent among Navy aviators, highlights the risk of becoming overly dependent on technological aids to the extent that performance without them is significantly impaired.
Moreover, the use of MR in combat situations introduces ethical dilemmas around warfare conduct. The enhanced capabilities provided by MR could lead to debates about the fairness and humanitarian implications of using such advanced technology in conflicts. This necessitates a balance between technological advancement and adherence to international warfare norms and ethics.
The responsibility of XR designers and developers in this context is profound. They must not only focus on the technical and functional aspects of MR systems but also consider their broader societal implications. This includes ensuring that the technology is used responsibly and in accordance with ethical standards. Designers and developers need to collaborate closely with military experts, ethicists, and psychologists to understand the full spectrum of impacts their creations might have. Furthermore, there should be ongoing assessment and adjustment of these technologies to align with evolving ethical standards and societal values.
As we venture further into this technologically advanced era, the responsibility of XR professionals extends beyond innovation, encompassing the ethical stewardship of their creations in the complex domain of military applications.