Information is vast, inspiration is vast, creativity is vast, but our time is short.
An article published in Scientific American addresses the popular myth that humans only use 10 percent of their brains, a misconception that has been widely debunked by neuroscientists. The article explains that while it is true that not all parts of the brain are active at the same time, the idea that 90 percent of the brain is unused is simply untrue.
Neuroimaging techniques such as PET scans and fMRI have shown that even during simple tasks, much more than 10 percent of the brain is active. Different regions of the brain are responsible for various functions, and the brain operates as a highly integrated network.
Overall, the notion that we use only a small fraction of our brains is a myth, and the article highlights the importance of dispelling such misconceptions to promote accurate understanding of brain function.1
Do performance pressure, stress and juggling many tasks at once improve creativity?
According to the findings of the Harvard Business Review, the simple answer is "no." When creativity is put under a time crunch, it typically suffers. While time pressure might push individuals to work harder and accomplish more, and it might even give them a sense of increased creativity, it tends to hinder creative thinking overall. The brief answer doesn't cover everything, however. Let's look at what time pressure entails, how it affects individuals in a work setting, and the various methods for handling it in order to boost creativity.
Our research suggests that as individuals experience greater time pressure on a given day, their likelihood of thinking creatively decreases. Interestingly, many people seem to be unaware of this trend. Participants in our study tended to rate themselves as more creative on days with high time pressure, despite evidence to the contrary in their daily diaries. These self-assessments didn’t align with the actual decrease in creative thinking observed over time as pressure mounted.
Furthermore, the decline in creative thinking was most noticeable during periods of extreme time pressure. Participants were asked to rate their perceived time pressure daily on a scale from one to seven, with seven representing the highest pressure level. On days when they rated their pressure as a seven, individuals were 45% less likely to engage in creative thinking compared to days with lower pressure ratings.2
Amabile, Teresa. "The Time-Pressure/Creativity Matrix." Harvard Business Review, Aug. 2002, hbr.org/2002/08/creativity-under-the-gun.
Overwork leads to a decrease in creativity
Working excessively can stifle your creativity because even when your brain appears to be inactive, it’s actually quite active. Giving yourself time to relax allows your brain’s Default Network to become more active, fostering connections between different brain regions that don’t typically interact. This often leads to moments of inspiration and heightened creativity.3
»Taking time to revamp, refresh, and relax is just as important to success as a motivated work ethic.«
I know that I am mixing APA and MLA styles, but MLA in-text citations look horrible and APA is very complicated when it comes to the source index. I took the best of both worlds.↩︎
As my final entry, I wanted to continue looking into immersion and what may cause it. As an avid anime and video game fan myself, I have always been fascinated by how some of my favourite pieces of media manage to captivate the audience and deepen immersion by not only having a compelling story, but also making use of various auditory and visual storytelling elements to underline it. This led to this blog post, where I want to explore some of my favourite recent examples and some ideas on what makes them work. By looking into these examples and analysing them, I hope to also gain insight into how to make my own work more immersive. Also, while I will try to keep this entry as spoiler-free as possible, there may be some spoilers for the analysed media; you have been warned!
Case 1 – Sousou no Frieren: A slow burner with a twist
Sousou no Frieren – literally "Frieren at the Funeral", official English title: Frieren: Beyond Journey's End – is a currently airing fantasy anime that deals with a very simple premise: what happens to the heroes after the big bad boss at the end of every fantasy story is slain and the world is saved? What comes afterwards, when the journey that should define a lifetime is over?
In Sousou no Frieren, that is where our story begins: the hero's party, consisting of the noble human hero Himmel, the ever-unfazed dwarven warrior Eisen, the corrupt human priest Heiter and the stoic, centuries-old elven mage and titular character Frieren, has just slain the Demon Lord, bringing peace to the lands. And, as the title suggests, Frieren and her emotional journey from this point on are the focus going forward.
Already in the first episode, as the hero's party is celebrating their victory in the capital, they catch a very rare sight: a meteor shower that only appears every 50 years. They agree to watch it together next time as well, though from a place with a better view, because Frieren remarks that she knows such a spot and that they should simply meet again. Afterwards, the party splits up, and our elven mage embarks on another journey. Fifty years pass before she sees her companions again, a span that passes in the blink of an eye: barely two minutes of the episode. It is in that moment that you, as a viewer, begin to realise how differently Frieren perceives time and its meaning. When she meets her friends again, the two humans, Himmel and Heiter, have become old men, barely recognisable. Still, they all keep their promise and watch the meteor shower together once more. Shortly after, Himmel dies, and it is at his funeral that the usually stoic and composed Frieren breaks down crying, wondering why she feels so sad about someone "she only spent a 10 year journey with", which to her is barely a blip in her long, long life, but to her human companions was one of the most important times of their lives. She regrets not spending more time with them, not connecting more, and not getting to know them better. It is at this moment that Frieren decides to retrace her former party's steps and go on a journey of self-discovery: to deal with the question of what it means to form a connection when it will eventually be lost anyway, how precious time together can be, and how to deal with loss and her emotions in general.
Duality between Action and Slice of Life
During this journey, Frieren comes to experience the change her actions have brought, takes on an apprentice, faces all kinds of hardships, old and new foes and other battles, and generally grows as a person.
Screenshots from the first six episodes of the anime: Sousou no Frieren's backdrops (top) and frequent wide-angle shots have an almost picturesque quality to them, inviting the viewer to stop and gaze, slowing down the pace without appearing boring. The fight scenes (bottom), however, are usually much more dynamic: the art style changes, featuring clear focus points, harsher lines and dynamic movement.
It is important to note, however, that while there are some amazingly animated action sequences, Sousou no Frieren is not an action anime at all. If anything, it would be more appropriate to call it a fantasy slice-of-life show, since Frieren's journey is more about her everyday life and how she finds her way in a world where elves are increasingly rare and thus nearly everyone she meets will die before her. This duality in moods, and how to approach it from a storytelling perspective, is by no means an easy feat. While the aforementioned change in art style certainly sells the change in tempo and engages the viewer, it is without a doubt also the amazing soundtrack, composed by Evan Call, that helps the change along in a very unique way.
American composer Evan Call composed most of the original soundtrack of Sousou no Frieren and has already released part of it as a pre-release on YouTube. The full soundtrack is set to release in April.
To understand how Call achieves that, it is first important to know how he approached the soundtrack in general. To fit the music into the world, he makes use of medieval instruments as well as choirs, and combines them with more modern composition techniques to achieve the tempo and response appropriate for any given scene. To make it possible to switch seamlessly between slow-paced and fast-paced sequences, between slice of life and action, instruments and sometimes even whole melodies are reused in pieces of different pacing, with only slight changes to the beat or rhythm, adding or subtracting drums, while still following the same leitmotifs. A good example of this, when comparing different songs from his pre-released soundtrack, are the following.
Time Flows Ever Onward – a slower piece
Fear Brought Me This Far – an uplifting battle theme that ends in a crescendo
The similarity between the two pieces, when experienced while watching the anime, is palpable and helps give the experience a more holistic feel. It allows for a change of pace without one or the other feeling out of place. Of course, it would be possible to analyse these two pieces, as well as the rest of the soundtrack, in more depth, but that is not the goal of this blog post. Rather, it is to understand how sometimes seemingly subtle things can influence an experience.
In that regard, it is also very interesting to see how the anime plays with the passage of time, as it follows Frieren's perception of it. Sometimes years pass within a few minutes, while at other times time seems to almost grind to a standstill. Within a few frames and scenery changes, years can pass, while elsewhere, especially during battles or important dialogues, the passage of time can be experienced almost in real time. The show plays with the idea that even to someone as ethereal and timeless as Frieren, time can pass in the blink of an eye, while small moments, if you are aware of them, can still have meaning, even in a life as long as hers. It is almost melancholic at times to see what an impact memories, and what may seem like such small gestures or occasions at the time, can have. And of course, Evan Call provides the audience with more than one other piece that captures this ethereal, timeless, yet oftentimes melancholic quality quite well.
For 1000 Years – a slow piece with an almost ethereal quality to it, a play on the passage of time
It is very hard to put into written words what it feels like to experience these pieces while watching; it is probably best to experience them oneself. Hopefully, by being a little more aware of them now, the experience can be made that much more enjoyable.
Master and apprentice
One last thing to add that deeply fascinated me when watching the anime was the relationship between Frieren and her apprentice, Fern. They display a remarkable similarity in some aspects, despite one being human and the other an elf, while being completely different in others. It is delightful to see their interactions and how they experience the same things sometimes differently and sometimes alike, especially when it comes, again, to the passage of time. In this sense, Fern grounds Frieren in the here and now, and might be the first proper human connection she has made since the passing of her party. To see how that changes Frieren, how the two influence each other, and how they copy each other's mannerisms and little details in their behaviour is a joy to watch. The most fun detail showing how deep an impact Frieren has probably had on Fern's life has to be their respective themes, both of which follow the exact same melody and leitmotif, with only small changes to individual notes, yet still manage to capture both of their characters perfectly. Sadly, so far only Fern's theme has been released, but, as mentioned before, these songs are probably best experienced while watching the anime anyway.
Zoltraak – named after the offensive spell both Frieren and Fern use during their journey, which, due to story progression, has become Fern's theme; Frieren's is very similar
Case 2 – Nier: Automata: The ultimate humanist fable?
While Sousou no Frieren follows a more conventional fantasy setting, albeit with a new spin on it, Nier: Automata falls into the science-fiction genre while being no less fantastical. Set in the year 11,945, Earth has been invaded by advanced aliens and their mechanical soldiers: machine lifeforms whose whole purpose is the conquest of the planet and the extermination of humanity. After endless battles, humanity has been brought to the brink of extinction and has fled to the moon, from where it now wages a seemingly endless war, using its own placeholders, highly sophisticated battle androids called YoRHa, to reclaim the planet and defeat the occupiers. In Nier: Automata, the player is thrust into the middle of that conflict and takes control of two such YoRHa units: a battle unit called 2B and a scanner called 9S. Together, they take on the invaders and start to uncover the truth behind the invaders, their human masters, the war they are part of, why the mysterious unit A2 deserted YoRHa, and their main adversaries, the machine lifeforms. And while the player experiences the conflict through their eyes, making use of all of the game's assets and gameplay elements, the game asks its ultimate question: what does it mean to be human?
A stylised cast and environment, and a large, varied number of tracks
As mentioned in the video, the game delivers a strong message and directs all of its gameplay elements towards it. These range from the wide variety of stylisation in its cast members (which, given its sometimes risqué nature, might admittedly be slightly off-putting to some people at first), to its varied, open and completely distinct environments (from urban areas overgrown and reclaimed by nature, to dry deserts, to amusement parks where machine lifeforms celebrate), to its different gameplay elements (mainly hack-and-slash action, with top-down and 2.5D sections as well as bullet-hell and other mini-games added to the mix), all the way to its soundtrack, composed by studio MONACA under lead composer Keiichi Okabe, which has earned multiple awards, most notably "Soundtrack of the Year" at The Game Awards.
Japanese composer Keiichi Okabe is the founder and chief executive officer of music production company MONACA, which provided the original soundtrack for NieR: Automata
What makes the soundtrack and Okabe's work so impressive is how seamlessly it integrates into the story and extends it, telling its own tale as an additional layer on top of the other storytelling elements. It does so using a variety of languages, styles (instrumental, vocal, and 8-bit sounds, among others) and musical elements that create a unique but strangely holistic collection of tracks. What makes this soundtrack even more impressive is its sheer length and variety: Keiichi Okabe provides the player with more than six hours of music, each track finely tuned to the exact moment it appears in-game, which might be only once during the whole playthrough, and only briefly, but still adds to the story being told.
意味/無 – meaning/nothing, which could also be read as "meaningless" when put together, is just the first of 119 tracks the composer has uploaded to YouTube. The name already hints at one of the game's hidden messages.
A shift in perspective and a strong social element that transcends borders and time
Another aspect that makes the game very unique is that it has to be played through a total of 7 times in order to get the complete experience and the complete picture. And while this can certainly be another off-putting factor for many people, it is worth every second spent in the game world.
A review by Skill Up that encapsulates what makes the game so special, aptly titled "The Masterpiece You (Probably) Won't Play" due to the multiple playthroughs the player has to complete first
While it may seem ridiculous at first to expect the player to play the same game multiple times, there is a logic behind it: with each playthrough, you take control of a different character and experience the story through their eyes, each time adding different things to the experience, emphasising different aspects and unveiling more of the truth. And while it might seem that each playthrough is the same, that only really holds at first, as the early ones culminate in what appears to be the same ending. On the third playthrough, however, an additional secret is revealed: the player actually hasn't experienced the same story twice, but rather a repeating cycle of the same occurrences. This may seem like a negligible difference at first, but it becomes very important to the game's overall message as the story progresses. And while the different endings may appear similar at first, though of course each character adds their own experiences, here too Okabe's music hints at a deeper message.
Weight of the World – the first ending's song, in English
"壊レタ世界ノ歌" (Kowareta Sekai no Uta, lit. "Song of a Broken World") – the second ending's song, in Japanese
It does so via the final song for each ending, which, while being the same melody, is completely transformed by its lyrics. While the first ending's song, "Weight of the World", is sung in English from 2B's perspective, the second ending's "壊レタ世界ノ歌" (Kowareta Sekai no Uta, lit. "Song of a Broken World") is sung in Japanese from 9S's, and gives a completely different meaning to the playthrough just experienced. And just like this, each additional ending song, of which there are four, for reasons I do not want to spoil right now, adds another layer.
Nouveau – another ending song in French
This slow change in the characters' perception, expressed musically, may go somewhat unnoticed by the player at first, especially during the initial playthroughs. However, it is an important element leading up to the final crescendo in "the End of YoRHa". What makes this song special is that it makes players aware of a gameplay element that they may not have consciously realised was part of the game, while of course still making use of it, and brings them together in a way that I have not seen in any other video game since. It is a fitting end to a very emotional journey, made all the sweeter by its final, moving rendition, which has the game's entire development team joining in as a choir: a small detail that once more highlights the togetherness and sense of belonging, of being part of a bigger whole, the player may feel in that moment.
the End of YoRHa – the final track of the soundtrack, combining all previous experiences before presenting the player with the ultimate choice
It is in this moment, after the player has overcome the biggest challenge so far, when all the different story hints and paths come together and they are at the height of their emotional investment, that the game delivers its final message while presenting the player with one final choice. It is a masterclass in how investment and immersion can work.
Case 3 – Arknights: "Lone Trail" and "Under Tides": A holistic approach to in-game events
As the final entry in the presented case studies, Arknights is a mobile tower-defense game developed by Chinese developer Hypergryph, set in the world of Terra. Terra could be seen as a grimdark fantasy mirror of our own world, made up of multiple factions and nations. Its population lives in constant fear of Catastrophes, cataclysmic events that can wipe whole cities off the map, as well as of Oripathy, a lethal, cancer-like disease caused by a Catastrophe's aftermath, while also struggling for survival amid prejudice, scarce resources and conflicts between the nations. These nations, factions and conflicts are oftentimes heavily inspired by our own world, placed in an environment far more hostile and unforgiving than ours: a dark spin on the world, so to speak. The same is true for the game's in-game events, which add to the overarching story, push the narrative forward and reveal more of Terra, its people and their struggles to the player. Two such events, as well as their aural themes, are the focus of this study; Arknights plays heavily with music and its themes, with Hypergryph even building a whole sub-division of the company wholly dedicated to creating and promoting music. Coincidentally, the final theme of both events was composed by the same composer: American-born Steven Grove.
Composer Steven Grove composed the final track / boss theme for both Lone Trail and Under Tides
Lone Trail
The official trailer of Arknights – Lone Trail already hints at its theme
Lone Trail is an in-game event inspired by the space race of the 1950s and 1960s between the Soviet Union and the United States. Nowhere is that more apparent than in its visual design, which can be described as retro-futuristic and which alludes to its inspiration. The following video perfectly encapsulates and exemplifies how Lone Trail manages to engage the player while drawing connections to its real, historical counterpart.
An analysis of the visual design that makes up the Lone Trail event in Arknights
The visual design of Lone Trail is one part that needs to be mentioned to understand Hypergryph's holistic approach to game design and the design of its in-game events, all with the express purpose of engaging the player and investing them in the built world. Another, as previously mentioned, is Steven Grove's musical addition to it.
Steven Grove – Control's Wishes is both the first and final piece of music the player hears when experiencing the story of Lone Trail
The final track, Control's Wishes, which can already be heard in the trailer, gains a whole new meaning when connected to both the story just experienced and the event's overall setting. It is a multi-layered, multifaceted track that can add to the existing story, boosting the player's immersion at the end, but can also be understood without much context, through its composition and musical elements alone. It is once more interesting to see how much a well-composed track can add in terms of perceived information and mood, without the listener necessarily being consciously aware of it.
Professionally trained opera singer and voice actor MarcoMeatball reacts to Control’s Wishes and analyses the main elements of the song
Under Tides
The official trailer of Arknights – Under Tides features many of the aquatic elements, both visually and aurally
Another event that follows a similar scheme in terms of its visual design and aural depth, and whose first and final piece was also composed by Steven Grove, is Under Tides. This final piece is named "Under Tides", and while the player may already be confronted with it in the initial trailer, it is only during the final boss battle that its true meaning becomes apparent.
Steven Grove – Under Tides is a boss battle theme with a slight twist to it
While the trailer's audiovisual storytelling already makes the event's setting and its connection to the sea obvious, it is in the final boss battle that all the aural and visual hints dropped beforehand come together.
Voice actor and opera singer MarcoMeatball also analysed the musical elements of Under Tides, uncovering the true meaning the player is exposed to, despite not having played the event himself
It is during the final fight, or rather at its conclusion, that the event's final twist is revealed: Under Tides isn't the battle theme of the enemy you are fighting, but of the player, or more specifically of the Abyssal Hunters at their side, whom they command throughout the battle and most of the event. "The player is the boss" is an endearing and exciting new twist on how battles can work in general. Letting players experience being the final obstacle, the ultimate foe to overcome, is something not many games have attempted so far, and Steven Grove's music provides a very fitting backdrop for the experience.
A short summary and reflection of my journey so far
Finally, it is important to mention that all these case studies are meant, at best, to provide the reader with a broad overview of my experiences and are in no way to be understood as an in-depth analysis of the discussed material, as that would require a much deeper engagement with each of the discussed elements and source materials. They should, however, hopefully engage the reader and show how different small details and design elements can naturally come together to create a bigger whole, all with the goal of creating an immersive, engaging and holistic experience for the user.
On my journey so far, which included research into storytelling, world building, animation, rigging, extended realities and more, I have become deeply fascinated by the topic of immersion and how to invest someone in a story. I am especially fascinated by experiences inspired by modern games and media and set in the virtual. As such, after reflecting on my findings, I have decided to further pursue immersion as my goal, with the final aim of creating a virtual reality experience that manages to catch the user's interest and allows them to completely immerse themselves in the world I am building, whatever shape or form it ends up taking. To refine and achieve this goal, I will continue my research into various related topics and begin first studies and small-scale tests of different applications and designs, which will hopefully help me carve out my path and find my final world.
»Inspiration can come from anywhere – an image, a comment, something that’s happened to me or someone I know. It’s hard to know exactly.« – Nicholas Sparks
Nowadays we are sometimes lucky enough to know where our inspiration comes from. We look at content on Instagram, TikTok and co. daily, and if we save it, we know where it came from. I have started a variety of lists on Instagram to get inspired. It is an easy, natural process that will help me in the future. Of course, I can always check the messages I send to people, mostly consisting of reels, but as long as there is no great way to locate specific videos on Instagram, my method is practical. I have structured my collection into the lists: Master Thesis, Cinematography, Animation, Stop Motion and Inspo [inspiration in general]: all the fields I could see myself doing my Master Thesis in at this snapshot in time. I wish I had started this sooner, because everything before that system is a mess of saved reels: from cooking, to vacation, to interior design, to fashion and plant care; even Marie Kondo couldn't organise it now. Better late than never to start a working system.
What am I looking for?
For cinematography, I am looking mostly for editing styles, mood, music and cuts. I do get inspired by the way the videos are filmed, but this is an area I am already more attuned to. I have the visual eye, the hands, the vision, but I am missing experience in film post-production.
Animation and stop motion are fields where I have a lot of theoretical knowledge through constant consumption, while my practical skills are still in their beginning phases. So I pretty much save every reel I like, no matter whether it is the style, drawing, cuts, composition or idea; everything goes in there. These fields are exciting to me because I am so new to them that I enjoy every improvement, like a child learning to take its first step. It is all interesting, and I am proud of myself for just making something move, or being able to draw something on my iPad. Hopefully, I can stay in this stage for a long time, even as my skills progress {hopefully!}.
The Master Thesis list was the one that started my collection. I suddenly saw many surf videos on Instagram, probably because I had been saying the word "surfing" a lot. It is not a full-on "our phones listen to us" conspiracy {duh}, seeing as I have surfed in the past and went to a surf camp just last year. It is a list of great female surfers, surfing documentaries and videos, and surfing in general.
The general Inspo list is, as the name says, for anything I find inspiring. It does not have to be study-related; it is just anything where I mentally go "ah, nice, never thought about it like that before." As I have been a collector of random ideas and thoughts since I could walk, I believe that anything can be useful someday. So if you need a random object to fit into something you have broken, knock on my door; you might get lucky.
To answer my own question: I am looking for inspiration, in any way, shape or form.
Looking back up the lines of this post, I can see my mind wandering from one place to the next without addressing my chosen title. Therefore, I will give you some names that I have collected: names of people who I believe make great content. Do with that what you will.
showing process, comic style, drawing, great character animation, full of love
*My taste has evolved from "drawings that look cute" to animation that tells a story. I only realised that because I had to look at my list. I also saw that I put a surf video into the animation list; mistakes are normal.*
I am going to make an extra blog post showing more inspirational creators. I realised that I have saved still illustrations too, so it might be interesting for creators from all specialisations. I believe it is worthwhile, because I always love to discover new "role models".
There are different ways to animate 2D characters. For smaller movements, such as moving an arm, the face, etc., it is entirely sufficient to work with parenting. That means you place the anchor points of the limbs at the position of the joints and link them with the pick-whip tool. For walk cycles, or more generally, movements that affect the whole body, it is advantageous not to keyframe everything by hand. Using a project with a "bicycle cycle", this post explains how a character can be rigged (given a skeleton).
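As an aside, After Effects can also repeat an already keyframed cycle automatically with the `loopOut("cycle")` expression instead of copying keyframes by hand. As a conceptual illustration only, here is a minimal sketch in plain JavaScript of the time-wrapping such a loop performs (the function name and structure are my own, not After Effects internals):

```javascript
// Conceptual sketch of "cycle" looping: map an arbitrary time t back
// into the keyframed range [start, end). Illustrative only, not the
// actual After Effects implementation of loopOut("cycle").
function cycleTime(t, start, end) {
  const duration = end - start;
  if (duration <= 0) return start;
  // Double-modulo wraps correctly for times before the range as well
  const wrapped = ((t - start) % duration + duration) % duration;
  return start + wrapped;
}

// Example: a 1-second walk cycle keyframed from t = 0 to t = 1
console.log(cycleTime(2.25, 0, 1)); // 0.25 — the cycle repeats
console.log(cycleTime(-0.5, 0, 1)); // 0.5 — also wraps before the range
```

The point is simply that one keyframed cycle can drive an arbitrarily long animation, which is why rigs and expressions beat hand-keyframing for repetitive motion.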
1. Preparing the character for animation
The character was designed in Adobe Illustrator. Vector-based illustrations offer a number of advantages over pixel-based ones. While drawing in Photoshop is more intuitive, and depth and texture are easier to simulate there, Illustrator offers the advantage that every part of an illustration can be adjusted quickly and remains scalable. Since I wanted to work in a stylised way in this case, I used Illustrator.
To prepare an illustration for animation, I always follow the same principle and ask myself: what should move, change, or be animated separately?
Wherever a part of the character is supposed to move on its own, it must be placed on its own layer. For example, to be able to animate a strand of hair separately, I move it to its own layer. All layers are prepared according to this principle. It is also important that the limbs overlap and, ideally, meet in a circle (ball joint).
Even if no lower body is visible, it is important that a circle or shape exists to which the upper body and the legs can be parented.
2. Installing Duik Angela — For rigging I need a plugin. Duik offers Duik Angela, a rigging tool compatible with After Effects. There are several ways to install it: either directly on the PC or locally via After Effects.
3. Preparations in After Effects — After importing the Illustrator file into After Effects, I check again whether every layer can be controlled individually. Correct naming, if not already done in Illustrator, has to happen now at the latest. Then I set the anchor points „correctly“: each is placed at the point around which the layer should rotate (default = centred). Parenting is still used alongside the rigging. Since I did not plan any face or hair rigging, the eyes, mouth and hair are parented to the head. This ensures that they stay in the right place even when the head moves.
The same applies to the upper and lower body.
4. Let’s get started — All limbs, legs and arms, are rigged individually. The arm is divided into upper arm, forearm and hand, since I do not want to animate finger movements. With the pen tool (without selecting a layer) I draw a line across the arm, with points where the joints are. Then I select upper arm, forearm and hand and choose „Humanoid“ -> „Arm“. Duik Angela now generates the bones for this arm (easy to recognise, since the bones appear colour-coded on top of the parts of the arm). Next, I parent the forearm layer to the forearm controller, do the same for the remaining limbs, and name everything accordingly. The Arm Tip controller can be ignored in this case.
In the next step, the three controllers are selected and „Auto-Rig“ is chosen in the Duik Angela plugin.
There is now an additional controller with which the entire arm can be moved. This controller is parented to the upper body.
The same is repeated for the remaining limbs, except that the legs are linked to the lower body.
5. Animating — The rigging is now complete. Using the limb controllers, movements can now be posed easily. The individual parts of a limb move together, so there is no need to animate their positions separately. Walk cycles and other movement sequences can be animated simply. It is still possible to parent the „overall controller“ of, say, an arm to further layers. This was particularly useful for my bicycle cycle, since the foot could be linked to the pedal.
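What the auto-rig gives you here is, at its core, a two-bone inverse kinematics (IK) chain: you move the hand controller to a target, and the elbow angle is solved from the bone lengths via the law of cosines. A minimal Python sketch of that underlying idea (the function and the planar 2D setup are my own illustration, not Duik’s actual code):

```python
import math

def two_bone_ik(upper_len, lower_len, target_x, target_y):
    """Solve a planar two-bone IK chain rooted at the origin.

    Returns (shoulder_angle, elbow_bend) in radians so that the chain
    upper_len -> lower_len reaches the target (clamped to its reach).
    elbow_bend is 0 when the arm is fully stretched.
    """
    dist = math.hypot(target_x, target_y)
    # Clamp the target to the reachable annulus of the chain.
    dist = max(abs(upper_len - lower_len), min(dist, upper_len + lower_len))
    # Law of cosines gives the interior elbow angle of the triangle.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to target minus the triangle's offset angle.
    cos_offset = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder, elbow_bend
```

This is why only the controllers need keyframes: the bend of the elbow or knee follows automatically from where the hand or foot controller is placed.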
Building upon my previous blog post on immersive level design, this post aims to explore immersion and interaction within VR, MR and AR a bit further by looking into different interface and interactivity solutions currently available (some previously mentioned, some new) that may increase intuitiveness and engagement. While AR and MR solutions are still covered, the main focus of this post will be on VR, as it is more in line with my future plans and planned research.
Tracking, controllers, recognition and other interaction methods
With a wide variety of different VR, AR and MR headsets and technologies comes an equally wide variety of input devices, interfaces and ways to interact with the created virtual environment. In VR alone, there exists a seemingly endless number of different controllers, with each headset developer putting their own spin on them.
Different headsets, different controllers – HTC Vive Pro, Meta Quest Pro, PS Move and Valve Index (left to right)
However, controllers like these are by no means the sole means of interacting with virtual environments anymore. With advancements in tracking, movement recognition and voice recognition, a vast variety of different input and interface methods has been developed alongside conventional controller-based inputs.
Hands, eyes, voice and haptic feedback
As previously mentioned, constant advancements in available computing power, frequent optimisations as well as new technologies make it possible to create virtual experiences that are more immersive than ever.
One such advancement lies in tracking and in how the tracked movement and data get processed. While hand and gesture tracking has been a long-time staple especially in AR headsets, thanks to their inbuilt sensors, it has also become one in VR and MR applications. Differentiating between hand tracking, controller-based tracking and gesture tracking (more commonly known as gesture recognition), which may all appear similar at a glance, is quite simple. Hand tracking, as the name suggests, tracks the actual movement of the hands within the virtual space.
Ultraleap’s 3Di, a small tracking camera for hand tracking, comes with its own integrated interface
Unlike controller-based tracking, it frees the hands for interactions without relying on buttons or other inputs. Controller tracking, in comparison, also tracks the hands’ movement, but instead of doing so directly, it tracks the hand-held controllers. These controllers usually come with a wide variety of buttons, joysticks and other triggers that can be programmed and used to interact with the environment and input information. Last but not least, gesture recognition interprets specific hand movements or gestures made by the user and reacts in specific ways, allowing interaction and enabling control over certain parts of the virtual space. It can be understood as a specific form of hand tracking, as specific parts of the hand get tracked, though in this case the gesture made is usually more important than the position of the hand relative to the rest of the body.
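The difference can be made concrete with a toy example: hand tracking delivers raw landmark positions, while gesture recognition turns them into discrete events the application can react to. A minimal Python sketch of a pinch detector (the landmark names and the distance threshold are my own illustrative assumptions, not any specific SDK’s API):

```python
import math

def is_pinch(thumb_tip, index_tip, threshold=0.02):
    """Classify a pinch gesture from two tracked 3D fingertip
    positions (in metres): pinching if the tips nearly touch."""
    return math.dist(thumb_tip, index_tip) < threshold

# Hand tracking supplies positions every frame; gesture recognition
# reduces them to an event ("pinch started") the app can handle.
frame = {"thumb_tip": (0.10, 0.20, 0.30), "index_tip": (0.105, 0.205, 0.30)}
pinching = is_pinch(frame["thumb_tip"], frame["index_tip"])
```

Real systems add smoothing and hysteresis so the gesture does not flicker on and off at the threshold, but the principle is the same.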
Ultraleap Leap Motion Controller 2, a gesture tracking controller with a wide variety of applications
While it may now seem that the main focus of current interaction lies in tracking the movement of extremities, mainly the hands, this is not true. Eye tracking, for example, is a gaze-based form of interaction that uses sensors to follow the user’s eye movements; it can enhance realism, allow interaction and render specific parts of the scene in more or less detail, deepening immersion as needed. Meanwhile, voice assistants such as Amazon’s Alexa, Microsoft’s Cortana or Google’s own Voice Assistant have long since been usable in VR and MR as well, to control and interact with the virtual environment using vocal commands. Combining these different tracking technologies can make the user’s environment feel much more responsive.
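The rendering use of eye tracking mentioned above is commonly called foveated rendering: render quality falls off with angular distance from the tracked gaze point, saving performance where the eye cannot resolve detail anyway. A minimal Python sketch of such a falloff function (the zone radii and quality floor are made-up illustrative values, not any headset’s real parameters):

```python
def foveation_level(gaze_angle_deg, inner=10.0, outer=30.0, floor=0.25):
    """Map angular distance from the gaze point (degrees) to a
    render-quality factor: full quality inside the foveal zone,
    linear falloff across the periphery, a fixed floor beyond it."""
    if gaze_angle_deg <= inner:
        return 1.0
    if gaze_angle_deg >= outer:
        return floor  # never drop to zero; the periphery still needs pixels
    t = (gaze_angle_deg - inner) / (outer - inner)
    return 1.0 + t * (floor - 1.0)
```

An engine would evaluate something like this per screen region each frame and pick a matching shading rate.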
But it is not only the user’s inputs that can be enhanced by new technologies to create a more immersive experience. Using haptic feedback systems, spatial computing and hardware solutions that create the illusion of seemingly endless virtual spaces even within a very limited physical environment, the immersion of the user and the responsiveness of the environment can be increased even further. While haptic feedback gloves have already been mentioned in a previous blog post, it is important to note that haptic feedback, in the broader sense, is no longer limited to the tactile layer.
teslasuit – a full-body haptic feedback suit that also tracks movement and can be used for motion capture
Haptic feedback suits, like the one shown above, can provide live responses from the virtual environment to the user via vibrations. This can create the illusion of a physical component being present in the environment. Furthermore, spatial computing, especially in combination with multimedia rooms like TU Graz’ VR Cave, can be used to merge the physical and digital worlds more seamlessly, allowing physical objects to be tracked and to influence the virtual, while virtual objects in turn interact with the real environment. Additional hardware that allows for extended movement through the virtual space even when the real space is small, such as an omnidirectional treadmill, can further blur the line between the virtual and the real.
Virtuix Omni One – an omnidirectional treadmill developed for gaming in VR
Things to consider
When presented with all these different options to track the user’s input and the data they provide, it is easy to be overwhelmed or to lose track of why these movements should be tracked and used in the first place: namely, to provide a smooth, fun and immersive experience that allows a maximum amount of user interaction with a minimum amount of effort on the user’s side. To ensure that, there are a few important steps to consider when designing such an interaction.
Abstraction, intuitiveness, optimisation and sense of security
A good first step when approaching the design of user interactions and interfaces for the virtual is mimicking the real world and its interactions, to increase both intuitiveness for the user and clarity when providing feedback. By adapting the sense of realism or choosing a certain level of abstraction for the inputs and / or interfaces, they can be simplified and made to fit the desired experience without distracting the user.
Frequent user testing, followed by refinement and optimisation of the employed systems, can increase responsiveness, accessibility and create a sense of security in the user when confronted with the virtual environment. Furthermore, the higher the continuity of the created content, both in design and in experience, and the more seamless the transition between the physical and the virtual, the easier it is for the user to engage, which also boosts self-confidence and security.
All in all, by making use of the different technologies described above, while being aware of the challenges and chances they bring and adapting the desired experience to the user’s needs, it is already possible to create amazingly responsive environments. It is, however, still important to be aware of the ever-present limitations of current hardware; with how rapidly technology keeps progressing, though, the next solution might already be around the corner.
Next steps:
Look further into different VR and MR solutions and their respective issues
Research essential tools for creating immersive virtual environments as well as different game engines and their advantages and disadvantages
Check methods of engagement and interaction within these digital environments
To better understand how to design immersive and engaging experiences in VR, AR and MR, and building on the previous post about common issues and how to solve them, it is necessary to understand the impact of world building and level design on the overall experience. As such, this blog post aims to provide a short overview of different considerations to be aware of when creating an immersive experience for VR, AR and MR. As with my previous post, however, I would like to dive even deeper into immersive world building and continue my research, so this entry is only a momentary summary – especially since I plan to eventually read all of Richard A. Bartle’s „Designing Virtual Worlds“ – and my research into this topic is still incomplete.
How to create engaging, immersive worlds using level design
Understanding the importance of space and spatial awareness
When designing VR and AR environments, it is crucial to remember that, unlike other virtual experiences in front of a monitor or screen, such as games or videos, in VR and AR the user moves their own body and their virtual avatar through both the real and the virtual world. This may seem like an obvious and perhaps unimportant difference at first, but it has implications: the user’s physical space and their interactions within it also need to be considered when designing. Furthermore, the user’s real-life comfort zone, field of view and the virtual viewable distance need to be considered to understand how to engage with the space.
Using realism, stylisation and interactivity to boost immersion
By utilising the capabilities of modern graphics cards and making use of optimisation and / or stylisation options when designing virtual environments for VR and MR, it becomes possible to create both realistic and semi-realistic experiences that keep the user immersed. No matter the style, however, maximum attention should be given to light, sound and any haptic or other tactile feedback, as only a holistic experience allows the user to perceive the created environment as real and thus allows for maximum immersion. In this regard, it is also important to be aware of how the user interacts with and influences the created world. Integrating easy, intuitive and natural-feeling interactions that take advantage of the capabilities of VR and AR input devices, such as controller movement and position tracking, gesture tracking or even full inside-out tracking, shapes how the user engages with the virtual world.
Design intuitive spaces and provide clear clues
Understanding the perception of different spaces and how it may influence the user, be it in an AR environment, in MR or completely virtually in VR, is important, as all these environments, compared to our usual computer screen, are perceived as three-dimensional and thus need to be approached accordingly. By providing clear cues for navigating the virtual space, it becomes easy and safe to explore. To counter the limitations of a user’s real, physical environment versus its virtual counterpart, a wide variety of movement methods can be explored, such as teleportation, locomotion or even simple walking, should the space allow it. This allows for a different approach to space than usual, as it further opens the environment up for exploration.
It is, however, important to always be aware of and understand the limitations of such an experience, both in hardware – e.g., tracking and sensing capabilities and physical boundaries – and in the user’s own perception – e.g., how much they can handle without being overwhelmed by stimuli and cues. A good example of such an overwhelming experience, one sure to cause fatigue in the long term, is the short film „Hyper-Reality“ by Keiichi Matsuda.
Hyper-Reality by Keiichi Matsuda
Be aware of storytelling and how it is affected by 3D
Last but not least, once the exploration of both the virtual and the physical space has been opened to the user, whether the created experience is in AR, VR or MR, it is paramount to understand how storytelling may be affected by the 3D environment. A good approach is to learn from the gaming industry, which has been creating explorable semi-3D and 3D environments for a long time. Of course, with AR, VR and MR, these spaces – these levels – are extended even further, with more feedback being added, more information being perceived and more interactions being possible, so careful consideration should be given to which tools are applicable and which are unsuited.
The end goal, in any case, should be to consider innovative ways to tell a story that seamlessly incorporates its narrative elements within the created user experience. The challenge when creating such a virtual space is to keep the experience as complete as possible while still offering free exploration.
Next steps:
Look into interfaces unique to VR, AR and MR
Check how interaction with VR, AR and MR may work
Research how to boost engagement and create intuitive interactions
Continuing the notion of my previous blog post on understanding VR, AR, MR and XR along with their strengths and weaknesses, this post aims to provide insight into the different problems one may be confronted with when designing for or regularly engaging with these technologies, and how best to solve them. It is necessary to mention that, while this post provides a broad overview of the different challenges one may face along with some possible solutions, it is in no way a complete guide yet, as my research into this topic is still ongoing and thus incomplete.
Discomfort, accessibility and other challenges
To provide a widely accessible, engaging and also comfortable and immersive experience, it is important to address the multitude of issues a user may experience when engaging with VR, AR or MR. By being aware of these issues, their implications for the experience and how to solve them, a much more complex and immersive experience can be created.
Motion sickness and other discomforts
Extended exposure to virtual environments, be it via VR, AR or MR, can lead to a wide variety of discomforts, when not handled and prepared for accordingly.
Simulator sickness, a special type of motion sickness, occurs when our visual and vestibular systems receive conflicting information. This can happen, for example, when the visual display or an overlaid object suggests rapid motion, but the user’s body senses no corresponding movement. The resulting mismatch can lead to disorientation, nausea and discomfort.
Furthermore, extended exposure to close-up virtual objects or displays can lead to both eye strain and general fatigue, as the eye has to constantly strain to focus. This strain may increase even more if the eye constantly has to follow rapidly moving objects or re-focus because overlays, interfaces or similar displayed objects move back and forth or rapidly in and out of focus.
Last but not least, when switching between AR and VR – which mainly occurs when working with MR solutions – but also when switching between AR or VR and the real world, users may experience a short bout of spatial disorientation. This may happen when the transition between the virtual and the real environment is too abrupt, or when not enough spatial cues are present in either environment for users to orient themselves. This can then cause disorientation, dizziness or brief emotional distress.
Accessibility and impairments
Another thing to consider when dealing with artificial environments is accessibility for people with certain sensibilities or impairments. To create an inclusive and immersive experience, a number of challenges have to be considered.
As VR and AR experiences currently rely heavily on visual elements and their spatial cues, it is very easy to exclude users with visual impairments. Whether it be complete blindness, colour blindness or other similar disabilities, a number of things need to be considered to make the experience as inclusive as possible. In a similar vein, people with auditory, cognitive or motor impairments may also have trouble interacting with the created environment. Be it challenging motion-based inputs, complex interfaces, information overload or a lack of adaptability, users may find an interaction needlessly challenging and thus feel excluded from the experience unless ample consideration is given. This, of course, is also true for users with special sensory sensitivities, such as to bright lights, loud sounds or intense vibrations.
As such, accessibility needs to always be considered when designing in and for such environments.
Possible solutions and their integration
These considerations may seem challenging and / or limiting at first; however, they can also be understood as a chance to create a more immersive, comfortable and holistic experience for the user. When designing virtual environments, it is important to constantly ask oneself whether ample consideration has been given to the described challenges, and to remember some of the solutions described below.
Comfort and customisation
To increase the user’s long-time immersion, and comfort while doing so, a number of small changes and customisation choices can already make a huge difference.
One such change is in how the software is structured in the first place. For example, by including options to adjust the volume or brightness of the headset, but also by providing different levels of haptic feedback, the whole experience can be customised to fit people of different sensibilities. In the same vein, allowing the adjustment of displayed colours or sounds, or adding audio-descriptive elements, can provide more inclusivity for people with auditory or visual impairments. Following this approach, the chance of motion sickness occurring can also be decreased by letting the user choose settings such as movement speed and field of view, and by adding comfort vignettes, so that each user can find settings that minimise their overall discomfort when exposed to VR or MR environments for an extended amount of time.
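A common way such a comfort vignette is implemented is to tie its strength to how fast the camera is turning: the faster the artificial rotation, the narrower the visible field of view. A minimal Python sketch of the idea (the speed thresholds and the user-adjustable `max_strength` value are my own illustrative assumptions, not taken from any specific engine):

```python
def vignette_strength(angular_speed, onset=30.0, full=120.0, max_strength=0.8):
    """Map camera rotation speed (degrees/second) to a vignette
    strength in [0, max_strength]: invisible below the onset speed,
    ramping linearly up to max_strength at full speed.

    Users prone to motion sickness can raise max_strength; others
    can set it to 0 to disable the effect entirely.
    """
    if angular_speed <= onset:
        return 0.0
    t = min(1.0, (angular_speed - onset) / (full - onset))
    return t * max_strength
```

Exposing parameters like these as user settings is exactly the kind of customisation described above: the same scene can then accommodate very different sensitivities.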
HaptX haptic feedback gloves
Sony’s accessibility controller, compatible with the new PS5 and PS VR
Another way to increase comfort is by making adjustments to the hardware and how the user engages with it. Using headsets with reduced latency or the currently trending inside-out tracking, for example, can reduce motion sickness by boosting the sense of presence within the current environment, and can also reduce the weight, and thus the discomfort, the user is exposed to long-term while interacting with the digital environment, be it AR, VR or MR. Similarly, by combining modern headsets with custom accessibility controllers or even haptic feedback gloves, a variety of different modes of interaction can be provided that could lead to a more inclusive environment. A few solutions here would be tracking gestures of different body parts, allowing gesture or voice controls as alternative input methods for people unable to use more conventional controllers, or adjusting the haptic feedback to provide more or less information depending on the user’s other sensory capabilities.
Control and adaptability
To conclude the described solutions: providing the user with a maximum amount of control, thus allowing them to adapt the experience to fit their specific needs, can greatly boost both inclusivity and immersion. This may seem like an obvious conclusion at a glance, but the fact that new technologies and methods are still being developed to create an even better immersive experience for the wide variety of user needs hints at the opposite. As such, it is paramount to always consider your targeted end user while developing, and to frequently ask yourself not only how a created environment is best experienced, but also which challenges some users may face when doing so, and to react accordingly.
Next steps:
Look into immersive environments and how to create them
Check methods of engagement and interaction within these digital environments
I just finished watching Ted Lasso. It is an incredible show. The humour is great, the jokes are fresh, the characters are loveable and interesting, the story is great… I loved everything about it, except that it has only 3 seasons. The writing is really inspiring because it is so fresh and unexpected. It is also filled with smart, touching and funny quotes, yet never cheesy or embarrassing – that is a very difficult line to walk.
What is it about? In short: American college football coach Ted Lasso heads to London to manage AFC Richmond, a struggling English Premier League soccer team [Ted Lasso, IMDb].
I am in awe of the writing and direction. So, who made it? It was created by Bill Lawrence, Jason Sudeikis, Brendan Hunt and Joe Kelly. I was actually surprised that it was created by men; I was sure it had a more diverse creative team. It turns out there were many female writers too, such as executive story editor Phoebe Walsh and staff writers Sasha Garron and Keeley Hazell, who shares her first name with a character in the series. Otherwise, I do not think the series could have understood women and their feelings so well.
The character writing was incredible. Taking a strong female lead, making her the antagonist and turning her into the saviour of all in one season is amazing. The show might be called Ted Lasso, but it is all because of Rebecca Welton.
A must-watch for everyone who wants to know how to tell a story, how to have perfect character development and basically how to make a masterpiece.
Louie Psihoyos is an American photographer and documentary filmmaker, best known for his work on environmental and conservation issues.
Some facts about him:
Early Life: Psihoyos was born in Dubuque, Iowa, on February 23, 1957. He grew up with a passion for photography and nature, which would later shape his career.
Photography Career: Psihoyos began his career as a still photographer, working for notable publications such as National Geographic and Smithsonian. His work often focused on nature, wildlife, and environmental issues.
Documentary Filmmaking: Psihoyos transitioned into filmmaking, using his visual storytelling skills to shed light on pressing environmental issues. His most famous work is the documentary film „The Cove.“
The Cove: „The Cove“ (2009) is a documentary, which exposes the annual dolphin hunt that takes place in Taiji, Japan. The film follows activists, including former dolphin trainer Ric O’Barry, as they uncover and attempt to expose the brutal practices of the dolphin hunting industry. „The Cove“ won numerous awards, including the Academy Award for Best Documentary Feature in 2010.
Impact: „The Cove“ had a significant impact, raising awareness about the issue of dolphin hunting and sparking international outrage. It led to increased scrutiny of the practice and efforts to halt it.
Other Projects: In addition to „The Cove,“ Psihoyos has been involved in other documentary projects focused on environmental themes. For example, he directed „Racing Extinction“ (2015), which addresses the ongoing mass extinction of species due to human activity. His most recent project is the Netflix documentary „You Are What You Eat: A Twin Experiment“ (2024).
Environmental Activism: Psihoyos is actively involved in environmental activism and uses his films as a platform to advocate for conservation and sustainability.
Organization: Psihoyos is a co-founder of the Oceanic Preservation Society (OPS), a non-profit organization dedicated to raising awareness about environmental issues through film, photography, and other forms of media.
Louie Psihoyos is a prominent figure in the world of environmental documentary filmmaking, using his skills as a photographer and filmmaker to raise awareness and inspire action on pressing conservation issues. A role model for sure.