IMPULSE #8: ProtoPie 101 Crash Course – Part 2

After completing the first part of the ProtoPie 101 Crash Course, I was excited to continue my learning journey. Unit 2 focuses on creating micro-interactions, covering three key topics:

  1. Screen transitions
  2. Scrolling & paging
  3. Slide menus

The pace of the training picked up in this section, making it more engaging but also requiring more attention. Here are my takeaways from each topic.

Screen transitions

Trying out different transition types was interesting, and I particularly liked the smart transition, which is similar to Figma’s Smart Animate. It enables smooth custom transitions between two states as long as elements share the same name across scenes. Since this is a familiar concept for many designers, it makes ProtoPie feel intuitive right from the start.

A notable feature is the ability to add the system status bar of a specific smartphone frame without manually inserting it as a layer. This helps maintain consistent states and transitions between screens.

However, one limitation became clear: all transitions happen simultaneously without the ability to sequence animations. Thankfully, there’s a workaround. By animating elements first and then applying a smart transition, we can control the animation order. The instructor demonstrated using scale and move responses first before adding the transition.

Another useful feature introduced in this section was the reorder response, which allows changing the stacking order of layers. The four reorder options are:

  1. Move forward one in the stack
  2. Move backward one in the stack
  3. Move to the top
  4. Move to the bottom

Finally, adding a jump response enables smooth transitions between scenes. Once all responses are added, they can be sequenced independently in the timeline, making animations more dynamic and structured.

Three ways to do a screen transition:

  1. Built-in transitions
  2. Custom smart transitions
  3. Animating elements before applying a transition

Scrolling & paging

The crash course provided well-prepared, pre-made material, which made following along much easier.

A container essentially masks a scrollable area. Any container can be turned into a scrolling container, with three scrolling options:

  • Scroll (continuous movement in a direction)
  • Paging (stepwise scrolling)
  • None (no scrolling enabled)

Interestingly, the container tool is a standalone tool in the toolbar. The direction of scrolling can be set to vertical or horizontal, and overscroll effects can be enabled or disabled. The process was extremely simple and intuitive!
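To keep these options straight for myself, here is a tiny TypeScript sketch of how I think about a scrolling container’s settings – purely illustrative pseudocode with made-up names, not ProtoPie’s actual data model:

  // Hypothetical model of a scrolling container's settings (illustrative only).
  type ScrollMode = "scroll" | "paging" | "none";

  interface ScrollContainer {
    mode: ScrollMode;                      // continuous, stepwise, or disabled
    direction: "vertical" | "horizontal";  // scrolling axis
    overscroll: boolean;                   // bounce effect at the edges on/off
  }

  // A horizontally paging carousel, as used later in this unit.
  const carousel: ScrollContainer = {
    mode: "paging",
    direction: "horizontal",
    overscroll: true,
  };

  console.log(carousel);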

Paging and carousel elements

With paging, it’s possible to position carousel elements stepwise in the center of the screen while scrolling horizontally. However, one issue emerged: the last item aligns with the screen’s edge instead of stopping at the center. To fix this, an invisible rectangle can be added at the end to create spacing. Initially, I thought this method was not elegant, but just moments later, the instructor introduced a better alternative – a simple adjustment in the properties panel. It’s great to see that ProtoPie anticipates these usability needs!

Constraints in containers

This was a short but important topic. By default, containers are anchored to the top-left corner and elements inside do not resize automatically. However, deselecting fixed height or width allows elements to resize proportionally. This feature is super useful for responsive design.

Three ways to create slide menus

As the instructor pointed out, there are multiple ways to achieve the same effect in ProtoPie. For slide menus, three trigger types can be used: drag, pull, and fling.

1. Fling trigger

The fling trigger was applied to a side menu layer, using a move response to shift it into view. However, there was a problem – the sidebar was off-screen, making it unclickable.

To fix this, the fling trigger can be applied to any visible object, affecting the sidebar indirectly. A better approach is to add a touch area, extending the interactive zone without requiring invisible layers. This is a clever and efficient way to improve usability!

To close the menu, a second fling trigger can be used with either a move response or a reset response (as seen in Unit 1). One downside is that the trigger must fully complete before the menu moves, which feels slightly unnatural from a user perspective.

2. Pull trigger

The pull trigger works differently – it’s based on distance pulled rather than absolute position. This means the menu moves in sync with the user’s finger, making it a more intuitive way to implement slide menus. Additionally, when released, the menu will automatically snap open or closed based on its position.

The instructor also mentioned that the pull trigger is perfect for pull-to-refresh interactions, which is great to know!

3. Drag trigger

The drag trigger behaves similarly to fling but requires a move response. By default, dragging moves the object in any direction, but setting it to horizontal-only in the properties panel restricts movement.

A potential issue: Users can drag beyond the intended position. To fix this, custom limits can be set in the properties panel. However, the menu still doesn’t snap closed by itself.

To fix this, we use a touch-up trigger combined with a condition. This allows defining a threshold – if the menu is more than 50% open, it stays open; otherwise, it snaps closed. Conditions make interactions much more flexible!
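To make that 50% rule concrete for myself, here is a small TypeScript sketch of the snap logic – my own illustration of the condition, not how ProtoPie implements it internally, and the 280 px menu width is just an assumed value:

  // Assumed setup: a 280 px wide side menu whose x position is -280 when
  // fully closed and 0 when fully open. Illustrative sketch only.
  const MENU_WIDTH = 280;

  // Runs on the touch-up trigger: decide where the menu should snap to.
  function snapTargetX(currentX: number): number {
    const openRatio = (currentX + MENU_WIDTH) / MENU_WIDTH; // 0 = closed, 1 = open
    return openRatio > 0.5 ? 0 : -MENU_WIDTH;               // >50% open stays open
  }

  console.log(snapTargetX(-100)); // released ~64% open -> 0 (snaps open)
  console.log(snapTargetX(-220)); // released ~21% open -> -280 (snaps closed)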

Further learning

At the end of this unit, the course provided additional tutorials showcasing what’s possible with ProtoPie. These examples were inspiring and raised my interest in advanced interactions. The unit concluded with another knowledge test, reinforcing the learning experience.

Conclusion

Completing Unit 2 of the ProtoPie 101 Crash Course was an exciting and rewarding experience. The structured approach made even complex interactions easy to follow and I now feel confident in creating sophisticated micro-interactions.

ProtoPie’s approach to prototyping continues to impress me – especially its flexibility, intuitive design and developer-friendly handoff process. I’m eager to continue learning and exploring more advanced features in the upcoming units.

Resources

https://learn.protopie.io/start

https://learn.protopie.io/course/protopie-101

https://cloud.protopie.io/p/1a8b65c2398caca10872b720?ui=true&scaleToFit=true&enableHotspotHints=true&cursorType=touch&mockup=true&bgColor=%23F5F5F5&bgImage=undefined&playSpeed=1

IMPULSE #7: ProtoPie 101 Crash Course – Part 1

Inspired by my colleagues, I thought signing up for the official ProtoPie 101 Crash Course would be good preparation for the practical piece of my master’s thesis. Since I may want to use sensor technology and communicate on both the visual and the auditory level, ProtoPie’s advanced prototyping capabilities promise to emulate what feels like a programmed app.

Getting started

Every chapter of the ProtoPie 101 Crash Course begins with a clear outline of „What you’ll need“ and „What you’ll learn.“ This approach helps set expectations and ensures users are prepared with the necessary tools and mindset before diving in.

Unit 1: Introduction to ProtoPie

ProtoPie introduces a unique conceptual model based on object + trigger + response. The logic is simple: If I tap the square layer, then it will move. This principle extends to multiple micro-interactions, like carousels.
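If I translate that mental model into pseudocode, it looks roughly like the TypeScript sketch below – purely my own illustration with made-up names, since ProtoPie itself needs no code at all:

  // One interaction = an object, a trigger, and one or more responses.
  type Trigger = "tap" | "longPress" | "drag" | "fling" | "pull";
  type Response = (object: string) => void;

  interface Interaction {
    object: string;        // the layer the trigger is attached to
    trigger: Trigger;      // what the user does
    responses: Response[]; // what happens as a result
  }

  const move: Response = (object) => console.log(`${object} moves`);

  // "If I tap the square layer, then it will move."
  const tapToMove: Interaction = { object: "square layer", trigger: "tap", responses: [move] };

  tapToMove.responses.forEach((respond) => respond(tapToMove.object));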

The prototyping process in ProtoPie involves three key platforms:

  • Studio for creating prototypes
  • Player for displaying prototypes on mobile devices
  • Cloud for sharing and collaboration

To get started, ProtoPie outlines four essential steps:

  1. Add assets
  2. Make interactions
  3. Test & share
  4. Collaborate

The structure is intuitive, making the onboarding process smooth and easy to follow.

Step 1: Add assets

This step introduces knowledge tests with multiple-choice questions. For instance, I encountered this question:

„When you export an entire frame or artboard from your design tool, how does this get imported into ProtoPie?“

The answer options were:

  • Object in Scene
  • Scene
  • New Pie
  • Frame

At first, it was unclear what exactly constituted a scene, a frame, or an object in a scene. However, the narrator clarified these distinctions after the first question, making it much easier to understand.

Importing assets from Figma was incredibly smooth. Everything transferred seamlessly, making the process highly efficient.

Step 2: Make interactions

My task in this step was to create an interaction where an icon on a home screen wiggles after a long press, allowing position edits, and stops wiggling when exiting edit mode. To achieve this, I used the following (a rough sketch of the logic comes right after the list):

  • Long press and tap as triggers
  • Rotate and stop as responses
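Here is the promised sketch in TypeScript – again just an illustration of the logic, since ProtoPie configures all of this visually and the class and method names are my own invention:

  // Hypothetical home-screen icon with the two triggers and two responses.
  class IconLayer {
    private wiggling = false;

    onLongPress(): void {
      // "rotate" response: start oscillating the icon to signal edit mode
      this.wiggling = true;
      console.log("edit mode: icon starts wiggling");
    }

    onTap(): void {
      // "stop" response: end the wiggle when leaving edit mode
      if (this.wiggling) {
        this.wiggling = false;
        console.log("edit mode left: icon stops wiggling");
      }
    }
  }

  const appIcon = new IconLayer();
  appIcon.onLongPress(); // enter edit mode
  appIcon.onTap();       // exit edit mode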

Some standout features during this step included:

  • Duplicating triggers and responses, which made iteration much faster
  • The timeline feature, which allows responses to occur sequentially – this was an excellent addition
  • The reset response, which makes it easy to revert an interaction back to its initial state

Overall, creating interactions in ProtoPie felt logical, intuitive, and efficient.

Step 3: Test & share

Next came installing the ProtoPie Player app. Prototypes can be displayed by:

  • Connecting a mobile device to a computer via USB
  • Scanning a QR code when both devices are on the same WiFi network

The display experience was just as seamless as in Figma. Sharing prototypes was also straightforward. A link can be distributed with controlled access settings, determining:

  • Who can view the prototype
  • Who can download the file for further editing
  • What UI elements are visible when opening the link

This flexibility ensures that collaboration remains secure and structured.

Step 4: Collaborate

The course then introduced interaction recordings (formerly known as „interaction recipes“). Unlike other prototyping tools that attempt to generate code – often unsuccessfully – ProtoPie focuses on providing accurate interaction values for developers.

With interaction recordings, developers can:

  • Play and scroll through the timeline at different speed levels to fully understand the interaction
  • Share single recorded interactions via individual sub-links

Additionally, ProtoPie allows elements with interactions to be transformed into reusable components, similar to Figma. Once inside a component, interactions move with it. Components can also be uploaded to a team library, enabling:

  • Cloud-based collaboration
  • Editing in library mode with cloud-synced updates
  • Easy pasting of components from local files to the cloud team library (though changes need to be published before others see them)

Knowledge test & learning progress

The first chapter concludes with a knowledge test, designed to help measure comprehension. If results indicate gaps in understanding, the system suggests repeating the chapter before proceeding. I found this to be an excellent learning reinforcement tool. Additionally, test results can be downloaded for documentation, making it a useful feature for employer verification.

After completing this first chapter, I had reached 20% progress in the overall crash course. This made me eager to continue and explore the advanced capabilities of ProtoPie.

Final thoughts

My experience with the ProtoPie 101 Crash Course so far has been extremely positive. The course is structured in a way that makes learning both engaging and practical. The hands-on approach, combined with well-integrated knowledge tests, ensures that users truly get the concepts before moving forward.

ProtoPie’s approach to prototyping – focusing on visual interaction recordings instead of auto-generated code – stands out as a particularly developer-friendly and intuitive method. It eliminates ambiguity and allows for precise handoff between designers and engineers.

With 20% of the course completed, I am excited to see what’s next! Stay tuned for further insights as I continue my ProtoPie learning journey.

Resources

https://learn.protopie.io/start

https://learn.protopie.io/course/protopie-101

https://cloud.protopie.io/p/1a8b65c2398caca10872b720?ui=true&scaleToFit=true&enableHotspotHints=true&cursorType=touch&mockup=true&bgColor=%23F5F5F5&bgImage=undefined&playSpeed=1

IMPULSE #8 | Two YouTube Videos – Critical Voices on Digital Fashion 

Critical voices on the topic of “Digital Fashion” are numerous online. I recently watched two YouTube videos by creators voicing their skepticism about certain aspects of digital fashion.

YouTuber “Izzzyzzz” offers a critical examination of the digital fashion industry, particularly focusing on the company DressX. Through a blend of humor, skepticism, and detailed analysis, she dissects the lofty promises of this burgeoning field, shedding light on its technological flaws, cultural implications, and environmental contradictions.

In recent years, the tech world has made bold forays into the fashion industry, promising a future where digital clothing would redefine how we dress, consume, and express ourselves. One company at the forefront of this movement, DressX, claims to revolutionize fashion through augmented reality (AR), virtual reality (VR), and artificial intelligence (AI). However, as explored by Izzzyzzz, this vision may be more fanciful than feasible, marred by technological shortcomings, questionable environmental benefits, and cultural insensitivity.

Founded in 2020 by Ukrainian entrepreneurs Natalia Modenova and Daria Shapovalova, DressX markets itself as a digital fashion retailer, offering garments designed to be worn virtually—whether in metaverse environments like Roblox and Snapchat or through photos and videos using their Meta Closet app. This app allows users to map 3D-rendered clothing onto their images, ostensibly blending sustainability goals with tech innovation. On paper, the initiative sounds groundbreaking, yet in practice, its execution raises serious concerns.

One of DressX’s primary pitches is sustainability. By replacing physical garments with digital ones, the company aims to address overconsumption and waste in the fashion industry. Influencers, notorious for purchasing fast fashion in bulk to maintain fresh content, are a key target audience. However, the practical limitations of digital fashion—such as poorly rendered designs and awkward interactions with physical bodies—render it a hard sell. Furthermore, the company’s involvement in NFTs contradicts its sustainability ethos, as blockchain technologies are widely criticized for their substantial environmental impact.

Technological flaws also plague DressX’s offerings. The app’s functionality is often compared to basic Snapchat filters, with clothing options frequently clipping, shifting awkwardly, or looking amateurish. These failings are especially jarring considering the high cost of these virtual garments, which range from tens to hundreds of dollars. Even their flagship marketing content, aimed at showcasing the technology’s capabilities, features glaring imperfections that undermine their promise of a high-tech revolution.

Beyond technical inefficiencies, DressX’s broader implications raise cultural and ethical concerns. The app’s clothing frequently appears to favor slim, conventionally attractive bodies, reinforcing harmful beauty standards. Moreover, some of its marketing materials evoke problematic themes, such as the ability to change race or body type through virtual avatars, which risk trivializing significant social and cultural identities.

Despite these challenges, DressX has garnered attention from major brands and media outlets. Publications like Vogue and The New York Times have praised the company’s vision, while luxury labels like Fendi and Diesel have partnered with them to create digital garments. This acclaim has translated into financial success, with the company securing $15 million in funding in 2023. Yet, outside of the tech and fashion elite, consumer reception remains tepid at best. Online forums and social media comments often ridicule the product for its aesthetic shortcomings and inflated promises.

Izzzyzzz’s critique highlights a recurring pattern in the tech industry: the promise of innovation often outpaces its practical application. Digital fashion might someday redefine how we engage with clothing, but DressX’s current iteration seems more like a tech gimmick than a revolutionary solution. While the concept of virtual clothing opens intriguing possibilities for creativity and sustainability, its realization requires far greater refinement and cultural sensitivity to truly resonate with the broader public.

Digital fashion remains a nascent field filled with potential and pitfalls. Companies like DressX are navigating uncharted waters, but the road to meaningful impact will require a commitment to addressing technological, environmental, and ethical shortcomings. Until then, the dream of replacing physical wardrobes with digital ones seems more like a speculative fantasy than a viable reality.

Another creator, fashion journalist Bliss Foster, highlights in his somewhat critical video “The Problem with Digital Fashion” that he does not have a problem with digital fashion per se, but rather with how it is treated and talked about – as part of a trend cycle and a technical fad. He acknowledges the value of digital fashion design itself, but argues that it should be treated as a separate discipline altogether. He illustrates this with a physical shirt that he owns, explaining how its material qualities are such integral parts of the design that they cannot be reduced away from fashion design.


Sources

Bliss Foster. “The Problem With Digital Fashion,” April 4, 2022. Accessed January 5, 2025. https://www.youtube.com/watch?v=who1-kptZzo.

Izzzyzzz. “‘Digital Clothing’ Is the Newest Tech Scam,” November 15, 2024. Accessed January 5, 2025. https://www.youtube.com/watch?v=dWKrK91fk2U.

IMPULSE #7 | Business of Fashion Masterclass: “The Opportunity in Digital Fashion and Avatars” 

The digital transformation is no longer a distant concept for the fashion industry; it is an immediate reality that is reshaping consumer behavior, brand strategies, and the very nature of creative expression. This was the focus of an online session hosted by the Business of Fashion (BoF) on 18 November 2021, which explored the rapidly expanding opportunities in digital fashion and avatars. Industry leaders from Burberry and Jimmy Choo joined the discussion, sharing insights on how their brands are navigating this evolving landscape and embracing new possibilities offered by digital assets.

The event revolved around the rise of digital assets and their implications for the fashion industry. These assets encompass a range of categories, including virtual garments, customizable avatars, gaming-related digital items, and non-fungible tokens (NFTs). Research conducted by BoF underscored the growing consumer interest in these areas, revealing that half of the U.S. population is considering purchasing digital assets within the next twelve months. This trend is being driven by advancements in blockchain technology, the meteoric rise of gaming platforms, and the societal shifts catalyzed by the pandemic, which blurred the boundaries between the physical and virtual worlds.

Representatives from Burberry and Jimmy Choo offered vivid examples of how luxury brands are engaging with digital assets. Burberry’s collaboration with Mythical Games, for instance, led to the creation of „Sharky B,“ a digital NFT character featured in the game Blankos Block Party. The brand emphasized the importance of authenticity and creativity in its approach, going so far as to involve its physical design teams in the crafting of this virtual product. This ensured that Sharky B retained the exclusivity and attention to detail associated with Burberry’s legacy, while also allowing the brand to connect with gaming communities in a meaningful way.

Jimmy Choo, on the other hand, took a different route, partnering with artist Eric Haze to release a series of NFTs tied to a capsule collection. Their project included a charity auction of a unique digital-physical hybrid sneaker and a set of collectible “mystery boxes” featuring virtual designs. This initiative allowed Jimmy Choo to test the waters of digital engagement, attracting new, digitally savvy audiences while exploring how its brand identity could translate into the virtual space.

The session also examined the opportunities and challenges that digital assets present to the fashion industry. Gaming emerged as a particularly promising avenue for brands to engage with consumers. With billions of players worldwide, games provide an immersive environment where fashion brands can integrate their offerings, whether through skins, exclusive collaborations, or virtual runways. Additionally, NFTs and digital fashion enable brands to create and nurture communities, redefining the traditional notions of loyalty and membership. For instance, NFTs offer the potential to establish exclusive clubs or communities where membership confers both digital and real-world benefits. This blending of virtual and physical experiences is becoming increasingly significant as consumers spend more time in online spaces.

However, the journey is not without challenges. The rapid evolution of technologies like blockchain creates complexity in platform selection and strategy development. Brands must balance the allure of short-term experimentation with the need to build sustainable, long-term consumer relationships. As panelists noted, the volatility of NFTs and the speculative nature of the market require brands to be strategic in their approach, ensuring that their digital initiatives are grounded in their core values and resonate authentically with their audiences.

A recurring theme in the discussion was the importance of partnerships. Both Burberry and Jimmy Choo emphasized the value of collaborating with the right partners to navigate the complexities of the digital ecosystem. Burberry worked closely with Mythical Games to ensure its NFT character integrated seamlessly into the gaming environment, while Jimmy Choo partnered with a platform specializing in blockchain and digital collectibles to launch its NFTs. Such collaborations allow brands to tap into specialized expertise, ensuring they show up authentically and effectively in these new spaces.

Looking ahead, panelists shared their predictions for the future of digital assets in fashion. Gaming was identified as a particularly fertile ground for innovation and consumer engagement. The integration of play-to-earn models, where players can own and trade in-game assets as NFTs, offers a new layer of interactivity and value creation. At the same time, the panelists cautioned against rigid thinking and emphasized the need for brands to remain adaptable. The digital space is evolving rapidly, and those who can think creatively and redefine traditional notions of luxury behavior will be best positioned to succeed.

Another area of optimism was the potential for NFTs to build lasting connections between brands and consumers. NFTs offer an unparalleled opportunity to create a sense of exclusivity and authenticity, fostering deeper engagement and loyalty. For instance, owning a brand’s NFT could grant access to exclusive events, limited-edition products, or even virtual worlds where consumers can further engage with the brand’s story.

The fashion industry is uniquely positioned to thrive in the digital realm. With its emphasis on storytelling, creativity, and identity, the industry can leverage digital assets to craft compelling narratives and help consumers curate their digital identities. As consumers increasingly value self-expression in virtual environments, brands that invest in innovation, authenticity, and meaningful partnerships will lead the way into this exciting new frontier.


Sources

The Business of Fashion. “The Opportunity in Digital Fashion and Avatars | #BoFMasterclass,” November 18, 2021. Accessed January 5, 2025. https://www.youtube.com/watch?v=1IpRFzcYqkU.


BoF INSIGHTS. “The Opportunity in Digital Fashion and Avatars.” Accessed January 5, 2025. https://insights.businessoffashion.com/products/the-opportunity-in-digital-fashion-and-avatars.

IMPULSE #6 | Film: “Padmaavat”

Recently, I watched the Indian movie „Padmaavat“. The film, which was released in 2018, stirred up multiple controversies among some communities of the Indian public. Directed by acclaimed Indian director Sanjay Leela Bhansali, „Padmaavat“ is set in medieval Rajasthan. It revolves around the story of Queen Padmavati, who marries a noble king and lives a prosperous life. Soon, an ambitious Sultan hears of Padmavati’s beauty and develops an obsessive love for the Queen, sparking a plan to steal her away for himself.

Director Sanjay Leela Bhansali, who is known for his elaborate period dramas, faced multiple instances of backlash from the Rajput community from the moment production of the film started. The controversy hinged mostly on grievances about the portrayal of Rani Padmavati, which was seen as insulting and misrepresentative. Protests turned violent, with protesters vandalising sets and even attacking the director himself. After a rescheduling, the movie was finally released on 25 January 2018.

One of the details of the movie which spurred backlash was the performance and costuming featured in the song “Ghoomar”. Diegetic songs and choreographed dance numbers are a prominent aspect of Indian films and are featured in a large majority of films of all genres. On Twitter, critics argued that the song misrepresented the Rajput queen Padmavati, leading to claims of disrespect. Protests erupted in November 2017, with some asserting that Rajput queens never danced publicly. Women from the Rajput community joined the protests, voicing concerns that the depiction of the queen in revealing attire was inappropriate. They stated that a Rajput queen would neither perform before an audience nor expose her midriff.

The controversy also drew the attention of the Central Board of Film Certification. On 30 December 2017, the board requested edits to scenes showing Padukone’s midriff to present a more modest portrayal of Queen Padmavati. Instead of reshooting, director Sanjay Leela Bhansali used computer-generated imagery to implement the changes. A revised version of „Ghoomar“ was released on 20 January 2018, featuring Padukone’s midriff digitally covered.

The achievement of this digital clothing insertion is quite remarkable, since the dance number features complicated movements that not only pose an immense challenge for rendering realistic movement and lighting of the fabric, but are further complicated by tracking the body of actress Deepika Padukone. The original shot was not captured with tracking in mind, which must have made it extremely challenging for the CGI artists. Unfortunately, I was not able to find details on the process of how the digital clothes were inserted. It would have made a fascinating case study.

Why is this Topic Relevant for my Thesis?

The insertion of digital clothing to meet standards of modesty in different film industries around the world is quite a common practice. A different and more recent example apart from “Padmaavat” is the 2023 Hollywood film “Oppenheimer”, in which a nude scene featuring actress Florence Pugh was altered for some distribution markets by inserting a digital black dress.3

This censorship through digital clothes is one application of “digital fashion” with a very concrete real-world use case. The implications of using these digital clothing insertions are worth discussing as a separate topic. Several aspects can be examined, such as the validity of these modesty standards in the first place (which of course is a much larger and more intricate topic). The alterations made to the costume designer’s original vision by inserting these digital garments for censorship reasons are also an interesting discussion to have. I have not been able to find sources discussing whether the digital clothing designs were coordinated with the costume designers of the respective movies, but whether they align with the designers’ original vision is questionable, to say the least.

Images


1 “Padmaavat (2018).”

2 “Padmaavat Before and After: Look How They Secretly Covered up Deepika Padukone’s Midriff in Ghoomar.”

3 Rubin, “Florence Pugh’s Nude Scene in ‘Oppenheimer’ Covered Up by CGI Black Dress in India and Middle East.”

Sources

IMDb. “Padmaavat (2018),” January 25, 2018. Accessed January 5, 2025. https://www.imdb.com/title/tt5935704/.

Hindustan Times. “Padmaavat Before and After: Look How They Secretly Covered up Deepika Padukone’s Midriff in Ghoomar,” January 20, 2018. https://www.hindustantimes.com/bollywood/ghoomar-before-and-after-look-how-they-secretly-covered-up-deepika-padukone-s-midriff-in-padmaavat-song/story-6wceiAO2DAu3u0GgmJhdAM.html.

Rubin, Rebecca. “Florence Pugh’s Nude Scene in ‘Oppenheimer’ Covered Up by CGI Black Dress in India and Middle East.” Variety, July 24, 2023. https://variety.com/2023/film/news/oppenheimer-florence-pugh-nudity-india-cgi-black-dress-1235678665/.

IMPULSE #6: Potentials and Ethical Challenges of Brain-sensing Technologies

During my research I came across multiple TED Talks that sounded highly relevant to my topic of first aid for epilepsy. I decided to run a TED Talk watching session to learn about the most recent findings and discussions around brain-sensing technologies. I did this because the concept of my existing prototype relies on seizure detection to trigger an app alert so that nearby bystanders can provide first aid. I had a look at the following TED Talks:

Forecasting and preventing epileptic seizures

David Garrett’s 2022 TED Talk, Listening to the Brain: A Functional Cure for Epilepsy, dives into how neuromodulation implants can provide a „functional cure“ for epilepsy. His research shows that it is possible to predict seizures by tracking electrical activity in the brain. Garrett explains how brain excitability levels that exceed a certain threshold lead to an electrical storm, triggering seizures. His team developed ultra-thin carbon fiber electrodes to be placed into the brains of living humans. This sensor technology is integrated into an epilepsy management system. These electrodes wirelessly transmit data, allowing AI-powered algorithms to detect seizure patterns and intervene before a seizure occurs.

Garrett’s work exemplifies the immense potential of brain-sensing technology. Once it is accessible to consumers, such advancements could drastically improve the quality of life for epilepsy patients. The ability to predict and prevent seizures could make constant supervision or emergency first aid unnecessary. However, continuous brain monitoring raises concerns about user acceptance – how comfortable would individuals be knowing that their brain activity is being monitored and potentially controlled? While the technology offers freedom from seizures, it may also introduce anxieties about privacy and autonomy.

AI wearables for seizure detection

Rosalind Picard’s 2018 talk, An AI Smartwatch That Detects Seizures, builds upon this concept by demonstrating how AI-powered wearables can recognize seizures and alert caregivers. Her work was inspired by cases of Sudden Unexpected Death in Epilepsy (SUDEP), which claims lives more frequently than sudden infant death syndrome. The smartwatch, developed by her company Empatica, runs real-time AI to detect generalized tonic-clonic seizures and has received FDA approval. This could be a game-changer for people with epilepsy, enabling immediate emergency response and reducing deaths. However, as with Garrett’s implantable devices, widespread adoption will depend on user trust and data privacy assurances. Real-time health data collection is extremely valuable for medical purposes, but it also opens the door for potential misuse.

Breaking the stigma around epilepsy

Besides technological advancements, societal perceptions of epilepsy significantly impact those affected. Sitawa Wafula’s 2017 TED Talk, Why I Speak Up About Living with Epilepsy, highlights the emotional and psychological struggles individuals face. She describes losing her job and dropping out of school due to her seizures, leading to isolation and frustration. Through online blogging and advocacy, she found a way to empower others and change the narrative around epilepsy. Wafula’s talk shows the importance of combining technological advancements with public awareness and support systems. Brain-sensing technologies can provide medical solutions, but addressing stigma and ensuring societal acceptance are equally crucial for improving patients’ lives.

Ethical dilemmas in brain data privacy

Nita Farahany’s 2023 TED Talk, Your Right to Mental Privacy in the Age of Brain-Sensing Tech, shifts the conversation towards the ethical aspects of neurotechnology. As major tech companies integrate brain sensors into everyday devices – such as headbands, earbuds and watches – brain activity is becoming increasingly transparent. Farahany warns that while brain-sensing technology has immense potential for treating conditions like epilepsy and PTSD, it also presents unprecedented privacy risks.

Brain data is more sensitive than any other form of personal data. It can reveal emotions, preferences and thoughts, raising concerns about microtargeting and behavioral manipulation. Farahany calls for the recognition of cognitive liberty as a fundamental human right, which means that individuals must have control over their own brain data. Without well-thought-out ethical frameworks, neurotechnology could become a tool for surveillance and control rather than empowerment.

Expanding Our Understanding of the Brain

Finally, Ed Boyden’s 2016 TED Talk, A New Way to Study the Brain’s Invisible Secrets, presents an approach to understanding the brain’s microscopically small structures. Boyden’s team developed a technique using expandable materials – similar to those found in baby diapers – to enlarge brain tissue for easier examination. By physically expanding the tissue, researchers can distinguish between biomolecules and recognize structures that may be responsible for neurological diseases.

Boyden’s work emphasizes the importance of fundamental research in brain science. While neurotechnologies are advancing rapidly, they still rely on a limited understanding of brain function. By developing new ways to study the brain, scientists can create more effective examinations, and medical professionals can offer targeted treatments based on solid understanding rather than guesswork.

Conclusion

The concept for a first aid app for epilepsy, which I initially turned into a prototype and which is supposed to be powered by brain-sensing technology, could be of great importance in ensuring timely first aid from strangers and timely medical assistance. However, once predictive algorithms and real-time AI monitoring are integrated, such an app would need to shift its focus to the scenario before a seizure occurs. And if a component is included that suppresses the electrical anomalies in the brain altogether, so that seizures no longer occur, the usefulness of an app that provides first aid instructions to public bystanders decreases significantly.

However, the success of such a technology depends on trust and ethical considerations. Continuous brain monitoring comes with concerns about privacy, data security and user acceptance. If individuals are afraid of how their brain data might be used or shared, they may choose not to use the technology. Regulatory measures and transparent policies must be in place to ensure that brain data remains protected and is only used for the benefit of the user.

Ultimately, while a first aid app for epilepsy has the potential to improve first aid care, it must be developed with both innovation and ethical responsibility in mind. By addressing privacy concerns and prioritizing user autonomy, we can create a future where technology truly empowers those living with epilepsy.

Resources

https://www.ted.com/talks/david_garrett_listening_to_the_brain_a_functional_cure_for_epilepsy?subtitle=en&lng=de&geo=de

https://www.ted.com/talks/rosalind_picard_an_ai_smartwatch_that_detects_seizures?subtitle=en&lng=de&geo=de

https://www.ted.com/talks/sitawa_wafula_why_i_speak_up_about_living_with_epilepsy?lng=de&geo=de&subtitle=en

https://www.ted.com/talks/nita_farahany_your_right_to_mental_privacy_in_the_age_of_brain_sensing_tech?subtitle=en

https://www.ted.com/talks/ed_boyden_a_new_way_to_study_the_brain_s_invisible_secrets?subtitle=en

IMPULSE #5: Last Gfü Meetup of the Year

© Institut für Epilepsie

Since I got in contact with the Institut für Epilepsie in Graz earlier this year to conduct a feedback interview about my prototype, I’ve been following their social media and website for any news regarding their institution. This is how I discovered the Gfü group („gemeinsam füreinander“, “together for each other”), an initiative and safe space for young people with and without epilepsy. The group meets once a month for leisure activities and to build community. It is driven by the ideas and impulses of its participants and is free of charge.

On 10 December I got the opportunity to join one of the last meetings of the year. The Gfü group met at Hauptplatz in Graz to take a walk along Graz’s Christmas markets. We were a small group of five people. I met Tanja again; she is a certified epilepsy consultant and part of the team at the Institut für Epilepsie. I had gotten to know her, along with her colleague Regina, during the feedback interview mentioned above. Tanja was accompanied by her boyfriend, and two young people in their twenties joined the meetup. I was warmly welcomed and got to tell how Tanja and I had gotten in contact and what I am researching in my master’s studies. It turned out to be a bit complicated to explain my research, but I managed to break it down to the core. Tanja’s boyfriend showed interest in my field of study and my topic, which led to a nice exchange. He studied at FH JOANNEUM himself, works in software testing, and knows about the importance of usability for digital products. He remarked that my topic and its complexity would definitely be worthy of a master’s thesis.

After we walked along Herrengasse and crossed the Landhaushof, we reached the intersection of Schmiedgasse and Landhausgasse to have hot beverages at one of the Christmas stalls. That’s when I got to know the two young people – for privacy reasons, names and genders are not mentioned in this blog post. They asked me about my field of study and seemed interested as well. In the course of the conversation, we got to talking about what they do in their lives. Without my asking and without any hesitation, they started talking about their individual forms of epilepsy. Going into this meetup, it had been important to me not to actively ask people about their disease and only to have that conversation if they opened up about the topic themselves. And that is what happened in the conversation between Tanja, the two young people and me.

The first person was diagnosed with focal seizures that are accompanied by side effects, which limit the amount of visual and auditory stimuli the person can manage to perceive. The person told us they had initially planned to study music, but had to abandon those studies when the epilepsy diagnosis came. In general, focal seizures emerge from just one part of the brain. Symptoms can vary greatly and include intense feelings, loss of senses such as smell or taste, changes in consciousness, and unusual, repetitive behavior. Before a focal seizure, affected persons experience an aura, a premonitory feeling that a seizure is about to occur. When a focal seizure is over, some people experience headaches or muscle pain.

The other person has experienced generalized seizures, describing them as a drunken feeling with a narrowed field of vision and muffled hearing. The person was on their own, watering plants in the garden, when the first seizure occurred. While having the seizure, the person picked up the phone but was not able to speak properly, and afterwards did not remember anything that had happened. Because of the diagnosis, the person decided not to go abroad for a year. Generally speaking, generalized seizures originate from both sides of the brain. They can be characterized by loss of consciousness, falls, massive muscle contractions and weakness, staring into empty space and repeated jerking movements.

As I quickly noticed, both people in their twenties were limited in their life choices by their epilepsy diagnosis. This contact with people with epilepsy was important to me. Not only did I get in touch with people with epilepsy, but I also learned something for my own life. The disease with a thousand faces, but rarely visible, is not something you would expect a person standing in front of you to have. Reflecting on this, without wanting to feel sorry for anyone, makes me realize how fortunate I am to be in good physical health. Around one in ten people will experience a seizure during their lifetime, but the majority remain unaffected by epilepsy. Considering how many other possible diseases there are, each with a certain probability of being affected, those of us who are unaffected are extremely lucky. It is something we should not take for granted.

Resources

https://www.facebook.com/photo?fbid=995075082664321&set=a.479284970910004
https://www.institut-fuer-epilepsie.at/gfue-gruppe/


https://www.ninds.nih.gov/health-information/disorders/epilepsy-and-seizures

IMPULSE #4: Understanding Epilepsy – Key Insights from an Online Training

Being in contact with Tanja from the Institut für Epilepsie in Graz, I received a lot of resources about epilepsy. One resource immediately caught my attention: selpers.com is a Vienna-based online platform that offers science-based, free online courses. Its mission is to help people with chronic diseases understand their condition, ease treatment and increase their quality of life. At its core, selpers believes that patients can influence the course of their disease through their behavior.

Besides trainings and livestream events, users can draw on texts, video interviews, animations, podcasts and checklists to learn what they need to know in an interactive and easy-to-understand way. A blog features interviews with experts, new research and stories.

I chose the training course „Epilepsie verstehen“ (“Understanding Epilepsy”), held by Assoc. Prof. Dr. Gerhard Luef. I expected the course to be a good refresher and overview of the topic of epilepsy, because I felt I still had some blind spots that needed to be filled. In the following, I will go through the contents and what I’ve learned.

1. What is epilepsy?

Dr. Luef starts by defining the chronic disease. Epilepsy is one of the most common chronic diseases worldwide. Around 10% of people will experience a seizure at some point in their lives, and approximately 5% of them will develop epilepsy. Seizures can happen to basically anyone, even individuals without prior diseases, such as students experiencing lack of sleep. While a single seizure does not necessarily mean someone has epilepsy, multiple unprovoked seizures typically characterize the disorder. Although epilepsy is not curable, it is manageable through medication for two-thirds of affected individuals. Surgical intervention can offer a permanent solution in some cases.

A seizure occurs when brain cells discharge electrical signals in an uncontrolled manner, much like a „short circuit“ or „electrical storm“ in a specific brain region. In some cases, the entire brain is affected, leading to disruptions in all senses, including loss of smell, taste or even hallucinations. The most common type of seizure is focal, arising from a specific brain region. Generalized seizures, on the other hand, involve widespread electrical discharges across the brain. Although epilepsy does not usually cause permanent damage, untreated seizures can lead to memory impairment over time.

2. Causes of epilepsy

Dr. Luef explains that epilepsy itself is not inheritable, but a genetic predisposition can increase the likelihood of experiencing a seizure. Some genetic and pre-existing disorders, such as Landau-Kleffner syndrome or Down syndrome, may promote epilepsy. Additionally, brain injuries caused by stroke, tumors, head trauma or severe concussions can lead to epilepsy by creating long-term instability in brain function. In general, anything harmful to the brain, including sleep deprivation and excessive alcohol consumption, increases the risk of seizures.

3. Symptoms and course of disease

Focal seizures may begin with an aura – a warning sign like nausea or sensory disturbances – before leading to loss of consciousness. Symptoms may include unusual smells, tastes, or involuntary movements like hand twitching. Dr. Luef claims that observers often find seizures distressing, especially when symptoms such as foaming at the mouth or bleeding occur. However, people with epilepsy typically do not feel pain during a seizure, though they may experience muscle soreness or injuries afterwards.

Absence seizures, most common in children and teenagers, are characterized by brief episodes of unresponsiveness, often mistaken for daydreaming. After a severe seizure, individuals may fall into a deep sleep and/or experience disorientation for minutes up to an hour. Those who suffer generalized seizures usually have no memory of the event. Dr. Luef states that minimal intervention by bystanders during a seizure is best: ensuring a safe environment is the priority. If warning signs like sensory changes occur before a seizure, documenting them, if possible, can help doctors with diagnosis and with management of the disease.

4. Diagnosis of epilepsy

According to Dr. Luef, a first seizure should always be examined by a medical professional as soon as possible, ideally within 20 hours. Common symptoms include tingling sensations, recurring déjà vu episodes or sudden nausea. Neurologists and pediatricians use various diagnostic tools to differentiate epilepsy from similar conditions like syncope (fainting) or psychogenic seizures.

A proper evaluation in a clinical setting includes medical history, eyewitness reports, and physical examinations, followed by an electroencephalogram (EEG) and an MRI. EEGs measure electrical activity in the brain to detect abnormal patterns. After a first seizure, patients may need to stay in the hospital for further monitoring.

5. Living with epilepsy

For most people with epilepsy, lifelong medication and therapy are necessary. In emergency situations, placing the person in the recovery position and minimizing injury risk is essential. For individuals planning a family, proper medical advice is crucial, since some epilepsy medications can cause birth defects. Seizures during pregnancy pose risks for both the mother and the baby, but according to Dr. Luef this should not stop prospective parents from having children.

Avoiding seizure triggers, such as sleep deprivation and alcohol misuse, depending on the specific form of epilepsy, is important. Forgetting medication can be dangerous, especially around activities like hiking, swimming or driving. Certain careers, such as pilot, train conductor or military service, deliberately exclude people with epilepsy due to safety concerns.

Epilepsy can be life-threatening in rare cases, such as Sudden Unexpected Death in Epilepsy (SUDEP), which occurs in about 1 in 1,000 patients. But most people with epilepsy can lead full and unrestricted lives. Open discussions about epilepsy, participation in support groups and spreading awareness are key to improving quality of life for those affected.

6. My message to you

In the last part, Dr. Luef addresses affected people and their relatives directly. Epilepsy is not something affected people need to be ashamed of; it is a normal disease like any other. Those affected should talk openly about their disease and exchange experiences with others. They should stick to their medication plan and ask a medical professional for help when intolerances occur. Last but not least, major risks should be avoided. Following all of this makes living with epilepsy much easier for those affected.

Reflections

Overall, this online training provided a very good and comprehensive overview. I felt quite well informed and wished I had taken this training earlier – it really helped to fill the gaps in my knowledge. It was interesting to hear an expert’s perspective on the disease, especially the part about people being shocked when they witness an epileptic seizure. This is valuable to know for what I would like to test in my potential master’s thesis: is a mobile device capable of delivering first aid information to untrained bystanders in public? Finally, hearing about everything that can cause seizures and how basically anyone can be affected left me feeling grounded and grateful for my health, and made me reflect on the choices I make in my daily life, such as not wearing a helmet when riding a bike.

Resources

https://selpers.com/ueber-selpers

https://selpers.com/epilepsie

https://selpers.com/epilepsie/epilepsie-verstehen

→ Impulse_02 | World Usability Congress

On 16 and 17 October, we attended the World Usability Congress 2024, as we did last year. While there were many interesting talks on UX/UI design, most didn’t connect directly with my thesis topic. However, one presentation stood out: Kent Eisenhuth’s Lessons Learned From Our Accessibility-First Approach to Data Visualization. It gave me fresh ideas about how to approach data visualization, and I want to share some of the key points that can be useful for me.

  1. Clarity Makes Data More Universal
    Kent emphasized the importance of making visualizations simple and clear. Overly detailed or complex charts make it harder for people to understand the message. This idea connects with my work on language visualization. If visualizing the sounds of words or sentences becomes too busy, it could lose its meaning.
  2. Accessible Visuals Help Everyone
    One takeaway that stuck with me was how accessibility benefits all users, not just those with disabilities. For example, using high-contrast colors, patterns, or clear text labels in data visualization makes it easier for everyone to interpret, even in situations like low light or small screens. I realized this applies to visualizing language sounds too—users should be able to understand the visuals without needing extra effort.
  3. Using Multiple Senses
    Kent talked about going beyond visuals by incorporating sound or touch into data visualizations. This reminded me of the potential for multisensory designs in my language visualization project. For example, visualizing how German or English sounds could include audio feedback to make the experience richer for users.
  4. Focus on the Message
    A big part of accessibility is focusing on what you’re trying to say with the data. Kent showed examples where the message was lost because of too much decoration or unnecessary details. For my work, this means focusing on the core idea: how different languages “look” when spoken, and not getting distracted by extra effects.

Conclusion

Kent Eisenhuth’s talk made me think about how important accessibility is in data visualization and how I can use it in my thesis. Visualizing language sounds in a way that’s clear and easy to understand is essential for making it useful to others. Overall, it was a very inspiring talk!

→ Impulse_01 | Klanglicht

Klanglicht 2024 was both challenging and exciting for me. Having the chance to showcase two projects, I experienced some of the most rewarding but also most stressful moments throughout the process. We presented Sonolux Speculative Future in the Young Masters exhibition with Mahtab Jafarzadeh Miandehi, Hannah Albrecht, Francisco Sylla, and David Laßlberger, and Langnicht as part of Spektrum. The Klanglicht festival is such an inspiring event for me and my thesis, featuring installations that combine sound and visuals in creative ways.

© Max Kathan

For this project, we created a multi-sensory installation inspired by research into the acoustic ecology of the Hilmteich area in Graz. It explored how urbanization—like the construction of the LKH hospital—impacts this natural space. We imagined speculative futures for Hilmteich, showing two scenarios while reflecting on its current state. This installation was featured in the Young Masters exhibition at Schlossbergstollen.

David and I worked on audio-reactive visuals using TouchDesigner, which was a great way to practice my skills and showcase them in an exhibition. On top of that, we all worked together to construct the installation, which made the whole experience even more challenging but good 🙂

Being part of Young Masters was truly amazing. I’m really proud of our team and what we did!

© Max Kathan

For Spektrum, we transformed the nave of Antoniuskirche in Graz into an immersive media experience. The title reflects the exploration of light and sound spectra, as well as the broader concepts of diversity and range. Our narratives focused on the in-betweens—like emotional and meaning spectra, harmony and chaos, colors and movement.

This was made possible with the guidance of our amazing lecturers: Astrid Drechsler, Daniel Fabry, Michael Kernbichler, Didi Mosbacher, and Roman Pürcher.

Langnicht explores the spectrum between nature and technology. The entire process—from planning to building to the final presentation—was such a mix of excitement, learning, and occasional stress. Our team grew so much during this time, and we created something we’re all really proud of. For me, working with After Effects and contributing to the construction was especially inspiring.

© Max Kathan

Conclusion

I feel really proud and lucky to have been part of Klanglicht 2024! It was an amazing experience to not only showcase our work but also to see and connect with other artists, exchange ideas, and receive feedback from professionals from the field of design. All the hard work, stress, and long hours were absolutely worth it. I’m so grateful for this opportunity and everything I’ve learned along the way! ✨💙