#19 | Thoughts on auditory cues

So far, I’ve spent most of my time on the structure of the app, which dictates what the sound needs to support, but I haven’t worked on the auditory cues yet. These cues should support first responders when they are unable to look at the phone while following instructions. As this semester is coming to an end, this will not make it into my final prototype; instead, I have started a free-association exercise to extend the visual cues with auditory ones.

What is needed?

My first thoughts revolved around three requirements: an alert-like sound to draw attention to the device, a voice that gives the instructions shown in the emergency steps, and loop functionality.

One requirement for the alarm tone is that it must be audible to as many people as possible, in as many environments as possible. Some emergency situations involve difficult acoustic environments, so the frequency and volume of the sound should be chosen accordingly.

The second requirement is a voice that gives instructions and supports the visual information on an auditory level. My first idea for customized emergency information about one’s specific condition was to have the voice recorded by the person with epilepsy themselves. But this would increase the interaction cost, because users would have to enter the information twice: textually and verbally. If the concept were extended to support more than one language, the effort would increase further and become less feasible for users who lack the necessary language skills. Second, it is uncertain whether the recording quality and the way each user speaks would be sufficient to be clearly understood.

For this reason, a generated voice based on text input would be more appropriate for such a concept. AI-based speech engines have improved over time and are able to convert text to speech with very little processing time. The technology is there, and the results sound more natural and authentic than ever before. In the case of user-created emergency steps, the speech must be generated in the background after the text is entered, so that it can be played back without delay the moment the emergency steps are opened.
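This pre-generation idea can be sketched as a small cache that synthesizes audio as soon as a step is saved. The class and method names are my own illustration, and `synthesize` stands in for whatever text-to-speech engine would actually be used:

```python
from typing import Callable, Dict


class StepAudioCache:
    """Pre-generates and caches audio for each emergency step.

    `synthesize` is a placeholder for any text-to-speech engine;
    it takes the step text and returns the generated audio as bytes.
    """

    def __init__(self, synthesize: Callable[[str], bytes]):
        self._synthesize = synthesize
        self._audio: Dict[int, bytes] = {}

    def on_step_saved(self, step_index: int, text: str) -> None:
        # Generate right after the text is entered, so no synthesis
        # has to happen at emergency time.
        self._audio[step_index] = self._synthesize(text)

    def play(self, step_index: int) -> bytes:
        # Instant playback: the audio already exists in the cache.
        return self._audio[step_index]
```

In a real app the synthesis call would run on a background thread or job queue; the point of the sketch is only that playback never waits on generation.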

The loop functionality requires infinite repetition so that the sound does not have to be restarted manually, but it should also allow muting when it is not necessary or too intrusive.

Audio timeline

I also thought about when the sound should be used.

After an alarm is triggered automatically by sensors or manually by the affected person and the timer is running, the alarm sounds. It continues until a first responder initiates the emergency steps by pressing “Start first aid”.

Next, a voice reads out what is displayed on the screen, clearly and well pronounced. It is played in a loop until the user swipes to the next emergency step. In this way, each piece of information is extended aurally until the user completes the emergency steps.

The information on the following screens does not require further auditory cues.
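The timeline above can be sketched as a small state machine. The state names and transitions are my own illustration, not part of the prototype:

```python
class EmergencyAudio:
    """Minimal sketch of the audio timeline: alarm tone -> looped step voice -> silence."""

    def __init__(self, step_count: int):
        self.step_count = step_count
        self.state = "alarm"   # alarm tone loops until first aid is started
        self.step = 0
        self.muted = False

    def start_first_aid(self):
        # First responder presses "Start first aid": voice for step 0 loops.
        self.state = "voice"

    def next_step(self):
        if self.step + 1 < self.step_count:
            self.step += 1     # voice for the next step starts looping
        else:
            self.state = "done"  # following screens need no auditory cues

    def toggle_mute(self):
        self.muted = not self.muted

    def current_audio(self):
        if self.muted or self.state == "done":
            return None
        if self.state == "alarm":
            return "alarm-tone"
        return f"step-{self.step}-voice"
```

Each returned label stands for an endlessly looping sound; `toggle_mute` covers the requirement that the loop can be silenced when it is too intrusive.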

#18 | Prototyping tailored emergency information

Now for the most important feature: Allowing people with epilepsy to tailor emergency information to their own form of the disease could actually make a difference, since epilepsy can manifest itself in many ways. I iterated on this for the digital prototype.

Start an alert manually

As I found out in my interview with the epilepsy institute in Graz, using sensors to detect an oncoming seizure is the more precise and reliable approach. Also, many people do not feel the so-called aura before a seizure. For these reasons, a warning to bystanders should be triggered automatically. But providing a manual way to start the alarm could make additional detection devices unnecessary for some people with epilepsy. So I rewrote the label on the app’s start screen to add the “manually” hint.

Providing general and customized emergency information

To differentiate between the pre-made default content for general seizure care and the ability to display customized emergency information, I changed the selection component. I replaced the dropdown, which is mainly intended for choosing between many options, with a single switch between the default and custom states to simplify the interaction.

The default “General seizure information” cannot be customized, but is displayed if selected. Once a user selects “Custom information”, each emergency step can be changed visually and textually in the input fields. At this point, the default information serves as a starting point, so the user does not have to write from scratch and can keep some of the information. In addition, users can optionally upload graphics or capture photos to visually support each step.
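This default-as-starting-point behavior can be sketched as a small data model. The field names and step texts are illustrative, not taken from the prototype:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative default content; the real wording follows general seizure care.
GENERAL_STEPS = ["Stay calm", "Cushion the head", "Time the seizure"]


@dataclass
class EmergencySettings:
    mode: str = "default"                     # "default" or "custom"
    custom_steps: Optional[List[str]] = None  # None until customization starts

    def select_custom(self) -> None:
        self.mode = "custom"
        if self.custom_steps is None:
            # Seed the editor with a copy of the general steps so users
            # do not have to write from scratch and can keep parts of them.
            self.custom_steps = list(GENERAL_STEPS)

    def displayed_steps(self) -> List[str]:
        if self.mode == "custom" and self.custom_steps is not None:
            return self.custom_steps
        return GENERAL_STEPS
```

Copying the list on first customization keeps the general information itself untouched, matching the rule that the default content cannot be edited.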

Amount of emergency information

While designing the emergency information settings, I asked myself how much information should be displayed in the settings and what the maximum amount of information is that people with epilepsy should be able to create.

Because emergency situations demand short instructions, I decided to limit the number of emergency steps to five and the ambulance information to no more than the default nine items. Five steps are more likely to be remembered, although even this remains uncertain under such circumstances, and fewer steps are also easier to navigate. Choosing less information is always possible, and preferable.
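A minimal sketch of these limits, where the numbers five and nine come from the decision above and the function name is my own:

```python
MAX_STEPS = 5            # emergency steps are limited to five
MAX_AMBULANCE_ITEMS = 9  # no more than the default nine ambulance items


def can_add(items: list, limit: int) -> bool:
    """Fewer items are always allowed; adding beyond the limit is not."""
    return len(items) < limit


steps = ["Stay calm", "Cushion the head"]
assert can_add(steps, MAX_STEPS)          # still room below five
assert not can_add(["x"] * 5, MAX_STEPS)  # a sixth step is rejected
```

Expressing the limits as an upper bound, rather than a fixed count, preserves the principle that choosing less information is always possible.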

Aspects to be discussed and next steps

Around customization, many unanswered questions came up during the prototyping process.

  • Custom emergency information: How do we bold text or add bullets? Do we need these aspects here?
  • User profile: Where do we customize information about the person with epilepsy, such as their name, picture, and how they want to address first responders before the emergency steps are displayed?
  • Organization of custom emergency steps: How do we delete or rearrange emergency steps? How do we reset custom emergency steps to the default content?
  • Thank you screen: Should the thank you screen be customizable and where can it be edited?

Answering these questions may be necessary in the future, but in order to validate this app concept and see whether potential users even understand it and get value out of it, I will leave it at that for now.

#17 | Prototyping gratitude advancement

In order to improve the interaction and overall experience between the person with epilepsy and their first responder, I introduced a new idea in the paper prototype: Once the emergency steps are complete, first responders can optionally leave their contact information in case the person is unconscious or is taken by ambulance. Using the contact information, the person with epilepsy can contact them afterwards to show their gratitude. I iterated on this for the digital prototype.

Types of contact

The first thing I considered for this feature was how a person with epilepsy can get back in touch with their first responder, and what type of contact would be best for that purpose. As both people are likely to be strangers, the contact method should be as low-threshold as possible when it comes to personal data. From my point of view, either a phone number or an email address meets these requirements. Since the input is not mandatory, I added a button to skip this step.

Thoughts on individualization

During this iteration, another thought emerged: reaching out personally could be one option, but leaving a personal message, such as a video or voice recording, could be another. The first option requires less setup effort from people with epilepsy, while the latter could provide more direct feedback. However, a recording of any kind would need to be fairly generic, as the circumstances of incidents vary. For this reason, I decided to go with the first option, which seemed simpler.

End of usage

The last consideration was how a first responder’s use of the app ends after they have started the process, followed the emergency steps, completed them, and optionally left contact information. Once the first responder’s task is complete, their access to the smartphone should be minimized, which could be done by returning to the lock screen. If the person with epilepsy needs to rest for a while, or if medical professionals take over after the incident, it is still useful to keep the medical ID accessible and visible on the lock screen.

Next steps

So far, I have focused mainly on the practical use of this prototype. But for a holistic user experience, the emotional aspect needs to be added in any case. This could be done by getting more graphical and defining the overall visual design.

#16 | Prototyping emergency steps

I iterated on my paper prototype by working on the emergency steps and putting more thought into it. For advanced interactivity, I decided to use Figma as a digital prototyping tool.

Added Done button

First, my paper prototype did not yet include an interaction to end an alert after a person with epilepsy has had a seizure. Considering all the interactions that need to fit on these screens, I decided to place a button next to the elapsed time and named it “Done”. Since this interaction relates to the whole screen and the elapsed time, this seemed to be the right area.

This position could also prevent people from unintentionally ending the emergency information. It is still questionable whether this button is needed on every single screen throughout the steps, but I think it makes sense.

However, the main interactions of displaying the medical ID or making an emergency call can remain in the bottom area this way.

Added confirmation to end emergency steps

To prevent unintentional exits, I also added a confirmation dialog to make sure people know when and if they really want to end the emergency steps.

Thoughts about displaying a person’s name

At the same time, I was thinking about where to display the name of the person with epilepsy. By giving first responders the ability to refer to the person by name, I assume the interaction will feel more personal. Since the emergency step screens already contain a lot of elements, I decided to display the name once at the beginning, before going into the emergency steps.

Added overlay for “When to call an ambulance”

I also added a collapsible element to the prototype that contains information about when it is actually necessary for first responders to call an ambulance. This can be opened at any time during the emergency steps. This should prevent the user from making premature emergency calls that are not necessary in every situation. I tried to keep the information short and easy to scan.

Next steps

At this stage of the prototype, two supporting aspects of the emergency steps are still missing: Visual support through graphical material for each step, and auditory cues to make the whole process more noticeable and usable when not looking at the phone. These features will need to be added in further development.

#15 | Feedback interview paper prototype

In order to validate my first prototype idea, our lecturer Birgit Bachler encouraged me to get in touch with a nearby interest group in Graz: Institut für Epilepsie IFE.

In the course of a 45-minute meeting, I was able to demonstrate my paper prototype on site and ask questions about my research results and the experience of my contacts. Regina and her colleague were very interested in my project.

Starting an alert to bystanders

Speaking from experience, Regina and her colleague told me that working with sensors to detect an aura before a seizure is much more feasible for the person who is about to have one. Many people do not recognize the onset of a seizure, and even when they do, manually activating an alert in the open app takes too long. Examples of such sensor inputs are fall detection and brainwave detection.

Visibility of wearable devices

Some of the studies I found suggested making electronic devices less visible. Contrary to my literature review, Regina and her colleague have never met people who were reluctant to wear conspicuous devices for fear of stigmatization; instead, those people are happy to have them.

Advancement: Expressing gratitude to bystanders

Regina and her colleague responded positively to my planned feature that would allow people to reach out to their supporters by leaving contact information. They had never seen such a personal approach in any device and thought it was a nice idea for human relations.

Advancement: Tailored emergency information

Throughout the conversation, we talked about different types of epileptic seizures, especially those that are not really addressed by technological solutions. They explained that tonic-clonic seizures are most associated with spasmodic movements, but seizures that manifest as confusional states are often overlooked. When I explained my idea for an individualized feature, Regina and her colleague were excited: allowing people with epilepsy to show emergency information tailored to their own form of epilepsy could potentially provide more targeted first aid.

Choice of medium

When I asked my contacts about an appropriate medium for my endeavor, they confirmed that a mobile application is indeed appropriate from their point of view. It doesn’t require anything more than a smartphone, which many people already have. Also, most of the people they treat are used to the existing technological solutions, which in most cases include a smartphone. In fact, they gave me a printed version of a seizure care pass where people can write individual instructions for their own condition. The downside is that this document can easily be overlooked by first responders and the content is less appealing to read.

Log feature

When Regina and her colleague mentioned that some users would appreciate having a log where they could see past incidents, I showed them the feature in my paper prototype.

Conclusion

In the end, I was provided with a lot of informational material, a warm handshake, and the option to reach out to Regina and her colleague again at any time. Overall, this step was really valuable to me, and it did not take much effort to get quick and useful feedback, even with a low-fidelity paper prototype to demonstrate the ideas.

Next steps

Next, I would like to incorporate the feedback I received from the Institut für Epilepsie into the development of my prototype and possibly contact them if I need more expert opinions.

#14 | Paper prototyping Ad-Hoc First Aid Care

With regard to the example of Aura: Seizure First Aid mentioned in the last blog post, I iterated on a paper prototype for “Ad Hoc First Aid Collaboration with the Public”. This prototype originated in an exercise at the very beginning of the course Design & Research 2.

Underlying concept idea

The prototype represents a mobile app that has to be activated manually by a person with epilepsy in an emergency, when they notice an oncoming aura. Bystanders nearby are then addressed by visual and auditory cues to pay attention to the smartphone.

Looking at the smartphone, bystanders are shown what is happening to the person and how to help. This follows a general set of emergency information that is applicable to nearly every type of epileptic seizure. Bystanders also receive information about when it is necessary to call an ambulance.

Advancement: Tailored emergency information

One of the biggest advancements might be the possibility to individually customize the emergency information to an affected person’s specific condition. For this, I added customizability to the app’s settings. Besides the general emergency information, which is set as the default, users are able to adapt the displayed steps to their own needs. For each step, a visual representation (a picture, a pre-made illustration, etc.) and a short textual instruction of a few sentences can be chosen, and further steps can be added.

Advancement: Findability of medical ID

Another addition is a lock screen widget that appears once an alert has been activated by the person with epilepsy. If the phone is locked, this widget provides an overview of and access to the first aid instructions as well as the medical ID. The latter is an often hidden feature of mobile operating systems that becomes more visible through the widget once an alert has been started.

Advancement: Expressing gratitude to bystanders

Lastly, another advancement could be an extended way to end the app experience once a seizure has been overcome: besides communicating appreciation and recognition for the help provided, bystanders can optionally leave their contact details (e.g. a mobile number) for the affected person. Afterwards, the rescued person is able to get in touch with their helper.

Auditory cues

As recommended by the experts mentioned in the previous blog post, the auditory level should also play an important role when it comes to the smartphone’s findability and to supporting first responders in following the first aid instructions. Obviously, a paper prototype cannot provide sound. This is where an, at least partly, digital component has to come in.

Next steps

Iterating on and extending the paper prototype was quite easy and quick. However, the details have to be refined and various circumstances have to be considered. At this point it makes sense to transfer this haptic paper prototype into a digital prototype in order to add interactivity and sound.

#13 | Evaluation of existing solutions

As stated in blog post #11, my previous thoughts on this research journey were accompanied by doubts about its relevance and about my ability to make a serious contribution to the current state of research. To overcome these doubts, I allowed myself to dig deeper into the mentioned application areas that I could possibly focus on:

  1. Ad Hoc First Aid Care Collaboration with the Public
  2. Semi-Ad Hoc Care Collaboration During Transportation
  3. Prior Education for Secondary Caregivers at Workplace/School

The “Ad Hoc First Aid Care Collaboration with the Public” approach still interests me the most and is the area that is most in demand according to the experts1. This is why I started here.

Ad Hoc First Aid Care

In my research last semester, I realized that there are very few solutions when it comes to mobile applications and wearable technology. Some seem to be poorly designed, others are still in the concept stage, or seem to be no longer in operation. I looked at the following solutions in particular:

  • Seizure First Aide by the Epilepsy Foundation Minnesota2
  • Dialog by the artefact group3
  • Aura: Seizure First Aid by PT. Supremasi Astadaya Sentosa4
  • Medistat Seizure SOS by Saksham Innovations Private Limited5

For the evaluation, I compared how these solutions matched the results of my previous research. Early on, I realized that the only serious candidate, according to my findings and the experts’ recommendations, was Aura: Seizure First Aid. Coincidentally, the designers had the same idea for the app as I did and were inspired by the same content provided by the Epilepsy Foundation.6

Evaluation of Aura: Seizure First Aid

Aura: Seizure First Aid’s core features, and how the application addresses my findings and recommendations, are shown in the accompanying screenshots and comparison table (©️ PT. Supremasi Astadaya Sentosa).

Conclusion

All in all, Aura: Seizure First Aid already meets the majority of my findings. The app is reduced to a general step-by-step approach to securing a person with epilepsy, delivered with a great user experience. The affected person starts an alarm by simply tapping in the app. Their smartphone then alerts nearby bystanders to help and to decide whether an ambulance is needed. As soon as the seizure is over, the process ends by thanking the first responders for their help.

Because epilepsy comes in many different forms, it manifests highly individually in each person. Participants in the cited study expressed a desire to provide bystanders with individually relevant information. Therefore, a mobile app could also allow users to customize the information displayed.

In addition, the general public rarely knows where to look for medical information about a person experiencing a seizure. Therefore, making the information more visible and accessible could be another addition. This could start on the lock screen of the phone.

Stories from people with epilepsy reveal a desire to thank bystanders after they have helped. Sometimes this is not possible because the person has not regained consciousness yet. Providing an extended way to contact helpers after a seizure could be another meaningful contribution.

Next steps

Taking my findings into account, the app concept can be extended to meet the needs of a similar target group that values customizability of shared emergency information. This could be the point where I start working on a paper prototype.

Resources

  1. Aehong Min, Wendy Miller, Luis M. Rocha, Katy Börner, Rion Brattig Correia, and Patrick C. Shih. 2021. Just In Time: Challenges and Opportunities of First Aid Care Information Sharing for Supporting Epileptic Seizure Response. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 113 (April 2021), 24 pages. https://doi.org/10.1145/3449187 ↩︎
  2. Seizure First Aide. Apple App Store. Epilepsy Foundation Minnesota. Retrieved May 20, 2024, from https://apps.apple.com/us/app/seizure-first-aide/id1018276687 ↩︎
  3. Dialog. Artefactgroup. Retrieved May 20, 2024, from https://www.artefactgroup.com/case-studies/dialog/ ↩︎
  4. Aura: Seizure First Aid. Apple App Store. PT. Supremasi Astadaya Sentosa. Retrieved May 20, 2024, from https://apps.apple.com/us/app/aura-seizure-first-aid/id1564615234 ↩︎
  5. Medistat Seizure SOS. Apple App Store. Saksham Innovations Private Limited. Retrieved May 20, 2024, from https://apps.apple.com/ro/app/medistat-seizure-sos/id1630889725 ↩︎
  6. Seizure Recognition and First Aid Certification. Epilepsy Foundation. Retrieved May 20, 2024, from https://www.epilepsy.com/recognition/seizure-first-aid ↩︎

#12 | User-centered Design in Wearable Expressions

The following paper „Understanding and Evaluating User Centred Design in Wearable Expressions“ by Jeremiah Nugroho and Kirsty Beilharz from the University of Technology Sydney was presented at the 2010 Conference on New Interfaces for Musical Expression (NIME). This paper describes the multi-dimensional design factors needed to create and evaluate what the authors define as „Wearable Expressions“.1

The authors differentiate between the following terms: Wearable Computing, Wearable Art, and Wearable Expression. According to the authors, Wearable Computing describes pocket-sized, portable computing systems that do not necessarily have to be worn on the user’s body; these systems follow the principles of traditional screen-based desktop computing. Wearable Art, by contrast, serves a mainly aesthetic purpose, supported by technology. The authors introduce the term Wearable Expressions, the focus of their paper, defined as “smart gadgets or devices” that users wear on their bodies and that contain “certain computing intelligence” to serve specific user tasks.

Given that this paper was written in 2010, at an early stage of consumer-ready wearable devices, it makes sense to sharpen the blurred lines between these terms and to emphasize the user’s perspective rather than pure technology or art. Three years earlier, in 2007, Apple Inc. had introduced the first iPhone, a mobile device designed to fit in consumers’ pockets.2 At the time, wearable technology was not as common as it is today.

Furthermore, the authors point out the lack of acceptance of wearable devices at the time, citing the negative example of the Oakley THUMB Pro. These earphone-embedded sunglasses posed several problems for users, such as low battery capacity and high market prices, and clashed with the common habit of holding a phone to the ear. Issues such as cost, comfort, appearance, ergonomics, usability, and aesthetics had prevented the public from adopting new designs.

Compared to user habits and the industry’s technical capabilities in 2010, today’s device landscape is much more diverse, which goes hand in hand with greater receptivity and adoption by potential users. Not only are products available at more affordable price points, they are also becoming more fashionable. One example is Bluetooth in-ear headphones, which seem to be among the iconic everyday objects of the 2020s.

The authors name 12 shaping factors for Wearable Expressions:

  1. Size / dimensions
  2. Device positions
  3. Power source
  4. Heat
  5. Weight
  6. Durability / resistance
  7. Washability
  8. Enveloping / fabrication
  9. Functionality
  10. Usability
  11. Sensation
  12. Social connectivity

According to the authors, size, device position, power source, and weight are fundamental and highly interrelated factors. These aspects can affect the user’s comfort, appearance, perception, and interaction with the device. Therefore, designs should meet the user’s expectations and ergonomics in relation to their anatomy. Depending on the context of use, considerations such as whether a power source should be corded or cordless, as well as the overall weight of the device, strongly influence aspects such as mobility and muscular effort.

Compared to the past, best practices and standards seem to have been established. The authors already mentioned the wristwatch as one of the simplest attachments to the human body. Today, we see a range of smartwatches, fitness bands, and health trackers, which are even integrated into their brands’ own technological ecosystems and communicate with each other. However, other forms of wearables, such as head-mounted displays or smart glasses, have not yet entered our everyday lives.

With the technological advances of the last 14 years, the mentioned issues of size, power source, and hardware weight may no longer be a problem. The possible range of functionality and features, on both the software and hardware side, seems less limited to a single device than it used to be. At the same time, today’s devices are more advanced than ever when it comes to the durability and washability of hardware materials. As defined in the paper, these factors require design considerations for multi-contextual use, such as flexibility, absorbency (of the body’s natural perspiration), or heat distribution.

Instead, given today’s technological possibilities, the focus of design must shift to the core of useful, user-centered concepts: ensuring the quality of usability, manifested for example in effortless navigation, and reducing the total number of selected features. Higher relevance and engagement can be achieved by truly enabling users to enhance their physical capabilities through the integration of sensations such as hearing or touch on the one hand, and to connect with each other socially on the other.

In summary, it is noticeable that this paper stems from an early stage of research on consumer-ready wearable technology. This is not only because the authors emphasize that their research started with this paper and that further steps were planned; it is also recognizable in the hardware issues preventing Wearable Expressions from being well designed, most of which we no longer face today. Provided that the authors made a serious contribution to the state of the art at the time, it is important and right to start with a paper like this. Taking a position for a more human-centered approach back then turned out to be groundbreaking for how technology should be designed in an increasingly technological world. Some parts of this paper may be outdated, but others are more important than ever.

Resources

  1. Jeremiah Nugroho and Kirsty Beilharz. 2010. Understanding and Evaluating User Centred Design in Wearable Expressions. Proceedings of the International Conference on New Interfaces for Musical Expression. DOI: 10.5281/zenodo.1177867 ↩︎
  2. Wikipedia. (2024). Apple Inc. In Wikipedia. Retrieved April 20, 2024, from https://en.wikipedia.org/wiki/Apple_Inc. ↩︎

#11 | Procedure in the Second Phase

In the first semester I was able to get an insight into how epilepsy first aid could be supported by technology. To take things further, I would like to start with a brief reflection on how the findings and recommendations of this research can be further processed for a real-world prototype and what the next steps might be.

Thoughts on research journey

After taking some time to reflect, I began to doubt how my previous research could lead to a meaningful contribution, since this topic is very focused on a specific use case (emergency) for a specific disease pattern (epilepsy), and there seem to be some promising solutions already out there.

To come to a conclusion, I see three options on how to proceed:

  • Option 1: Continue with my topic and start prototyping ideas. This would mean no more research than usual.
  • Option 2: Stay close to my previous research, but eventually choose a different use case or project approach. If necessary, look for similar areas where my research knowledge can be applied. In case of a change of direction, this could mean additional research.
  • Option 3: Completely change the topic. This would mean the highest amount of (new) research needed and could possibly lead to time constraints.

Weighing the options, a path between options 1 and 2 seems to make the most sense: The first step would be to recall my research findings (pain points, recommendations, etc.).

Next, I should try to evaluate how existing solutions align with what I have learned about the pain points of the target audience and the recommendations of the experts (option 1) to see if there is room for improvement or if a custom concept is even needed.

At the same time, I should be open to following other project ideas if the room for a serious contribution is too small or non-existent (option 2).

Given that this research phase is focused on prototyping, it is of course important to gain more insight into users and stakeholders.

Project approaches to follow

During my previous research, I found expert advice on areas of possible projects1. These include:

  1. Ad Hoc First Aid Care Collaboration with the Public
  2. Semi-Ad Hoc Care Collaboration During Transportation
    • Finding a Person in Charge & Care Information Sharing When Utilizing Public Transportation
    • Seizure Monitoring & Information Sharing While Driving
  3. Prior Education for Secondary Caregivers at Workplace/School
    • Information & Responsibility Diffusion
    • Facilitating Education & Stigma Reduction Strategies

Where to start

After a second look at what I’ve found out, I want to start by evaluating existing solutions for “Ad Hoc First Aid Care Collaboration with the Public”, which according to the experts is the area most in demand. At the same time, it interests me the most.

Resources

  1. Aehong Min, Wendy Miller, Luis M. Rocha, Katy Börner, Rion Brattig Correia, and Patrick C. Shih. 2021. Just In Time: Challenges and Opportunities of First Aid Care Information Sharing for Supporting Epileptic Seizure Response. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 113 (April 2021), 24 pages. https://doi.org/10.1145/3449187 ↩︎

#10 | Research recap

Over the past three months, I have primarily conducted literature research on the overall topic of First Aid Assistance for Chronic Diseases. During this process, this topic has evolved in diverging and converging research phases, becoming more and more detailed over time. At this point I would like to briefly recapitulate what I have learned so far.

Research development

With the announcement of my research topic, I got to know better what first aid and the chain of survival mean. I also got a great impression of which interactive first aid applications exist and which areas they cover.

In the beginning, I deliberately kept the topic very broad, which allowed me to discover things. To understand what first aid consists of, I drew a bigger picture of its different aspects. I also got a first idea of which types of chronic diseases could be considered. Very early on, I realized that I was interested in the specific context of sudden emergencies.

As I delved deeper into the variety of chronic diseases, I discovered types that were more and less suitable for my research. I also learned the difference between diseases, symptoms, and disease events. The candidates were cardiac arrhythmia, diabetes, and epilepsy. What makes epilepsy different from the others is that not every seizure is life-threatening. I found this interesting, which is why I decided to pursue this path in my research.

I looked deeper into epilepsy first aid and found a general approach that can be applied to any type of epileptic seizure: Get the person into a safe state and recognize when it is time to call an ambulance. So there is a difficult decision-making process involved.

I was lucky enough to find this very valuable scientific article that describes the current state of research for my central research question: How can we use technology to support first aid? This helped me immensely to understand the criteria that need to be considered, the pain points of people with epilepsy and their families, and recommendations for future technologies to address these issues.

In my search for tools and methods as a starting point for project work, I came across this scientific paper explaining and categorizing different design approaches. It highlighted the possibilities and limitations of design in the strict field of healthcare, which helped me to evaluate where to place a potential project.

Finally, I tried to find out more about untrained first responders who have experience helping a person with epilepsy. This turned out to be very difficult, so I tried to make assumptions about their perspective by studying seizure first aid stories. If further research on this is unsuccessful, I would need to gather information using other (empirical) methods.

Outlook

From this point of research, the next steps would include the following:

  • Gain greater insight for all stakeholders through additional (user research) methods
  • Further evaluate existing solutions
  • Consolidate insights, findings and other information into an initial concept