Touchless interfaces for healthcare communication

Exploring touchless interfaces to support healthcare workers when communicating on mobile devices

Master's thesis in Computer science and engineering

OLIVIA JARL
AMANDA OLSBY

Department of Computer Science and Engineering
CHALMERS UNIVERSITY OF TECHNOLOGY
UNIVERSITY OF GOTHENBURG
Gothenburg, Sweden 2024

© OLIVIA JARL, 2024.
© AMANDA OLSBY, 2024.

Supervisor: Thommy Eriksson, Department of Computer Science and Engineering
Mentors: Sophia Atif, Lisa Bark & Jacob Gunnarsson, Ascom
Examiner: Staffan Björk, Department of Computer Science and Engineering

Master's Thesis 2024
Department of Computer Science and Engineering
Chalmers University of Technology and University of Gothenburg
SE-412 96 Gothenburg
Telephone +46 31 772 1000

Cover: Illustration of a healthcare worker surrounded by various elements related to this thesis.

Typeset in LaTeX
Gothenburg, Sweden 2024

Abstract

In healthcare, efficient communication is crucial for delivering proper patient care. The utilisation of individual communication devices for healthcare workers is growing more widespread to optimise workflows and address alarm fatigue. With this change come challenges, considering the high demand for infection control and the many hands-on tasks performed by healthcare workers. Touchless interfaces have emerged as a possible solution to face these challenges.
Therefore, the purpose of this thesis is to answer the following research question: What should be considered when designing touchless interfaces to support healthcare workers when communicating on a mobile device? To address the research question, a pre-study followed by an iterative design process was conducted. Five separate visits to hospitals took place to observe and identify the unique needs of healthcare workers and the problems they face. These visits resulted in 12 problem areas exemplifying areas where touchless interfaces could be beneficial. Subsequently, requirements were formulated based on the pre-study and user studies, indicating what should be considered when designing touchless interfaces. Through the requirements, it became evident that eye tracking was not a suitable option for healthcare workers. However, both voice user interfaces and gesture-based interfaces emerged as viable alternatives. Two prototypes were developed to exemplify different approaches to implementing the requirements. The prototypes Myco Mini and Myco Main are two compact devices that assist healthcare workers in various situations by enabling interactions using a voice user interface and a gesture-based interface. The evaluation of the prototypes showed that the voice user interface was better received, both in terms of its ease of use and social acceptance. Therefore, additional studies on gesture-based interfaces are needed before implementing them for communication purposes in healthcare environments.
Keywords: Interaction design, touchless interfaces, voice user interface, gesture-based interface, healthcare

Acknowledgements

We would like to express our sincere gratitude to all those who have supported and guided us throughout the completion of this master's thesis. Firstly, we thank our mentors at Ascom, Sophia Atif, Lisa Bark and Jacob Gunnarsson, for formulating the initial track for this interesting project and devoting time to guide us with wise words. We would also like to thank Anette Meritähti at Ascom for your valuable input. Furthermore, we want to direct our gratitude to Thommy Eriksson, our supervisor at Chalmers, for your helpful feedback and interesting conversations throughout these months. Lastly, we express our greatest appreciation to the people we met during our user studies at the hospital departments. Without your valuable contributions, this project would not have been possible. We especially want to direct a big thank you to the medical department, which we had the privilege of visiting several times.

Olivia Jarl & Amanda Olsby, Gothenburg, June 2024

Contents

1 Introduction
  1.1 Aim
  1.2 Deliverables
  1.3 Research question
  1.4 Delimitations
  1.5 Ethical considerations
2 Background
  2.1 Ascom
  2.2 Research problem
  2.3 Related work
    2.3.1 Touchless interfaces in healthcare
    2.3.2 Touchless interfaces in other contexts
3 Theory
  3.1 Touchless interfaces
    3.1.1 Voice user interface
    3.1.2 Gesture-based interface
    3.1.3 Eye tracking systems
  3.2 Haptic interface
  3.3 Multimodal interface
  3.4 Usability
4 Methodology
  4.1 Research through Design
  4.2 Wicked problems
  4.3 Design thinking
    4.3.1 Empathise
    4.3.2 Define
    4.3.3 Ideate
    4.3.4 Prototype
    4.3.5 Test
5 Planning
6 Execution
  6.1 Pre-study
    6.1.1 Literature review
    6.1.2 Benchmarking
  6.2 Empathise
    6.2.1 Pilot study
    6.2.2 Interviews & observations
  6.3 Define
    6.3.1 Affinity diagram
    6.3.2 Problem areas
    6.3.3 Requirements
    6.3.4 Persona & scenario
  6.4 Ideate
    6.4.1 Brainwriting & braindrawing
    6.4.2 SCAMPER
  6.5 Prototype I
    6.5.1 Low-fidelity prototyping
  6.6 Test I
    6.6.1 Opportunistic evaluation
    6.6.2 Pugh matrix
    6.6.3 Result of Test I
  6.7 Prototype II
    6.7.1 High-fidelity prototyping
    6.7.2 Low-fidelity prototyping
  6.8 Test II
    6.8.1 Usability testing - Gestures
    6.8.2 Usability testing - Myco Mini & Myco Main
7 Results
  7.1 Problem areas
  7.2 Requirements
    7.2.1 Hardware
    7.2.2 Social aspects
    7.2.3 General functionalities
    7.2.4 Alert functionalities
    7.2.5 Call functionalities
    7.2.6 Usability
    7.2.7 Voice user interface
    7.2.8 Gesture-based interface
  7.3 Myco Mini & Myco Main
    7.3.1 Hardware
    7.3.2 Interaction
8 Discussion
  8.1 Execution
  8.2 Benefits & drawbacks of touchless interfaces
  8.3 Validation & generalisation
  8.4 Ethical considerations
  8.5 Future work
9 Conclusion
A Appendix 1

Glossary

Alert - An audible or visual signal in hospitals, for example, indicating when a patient needs immediate assistance from healthcare workers. (Swedish: Kallelse)

Emergency department - Hospital department providing immediate medical care for urgent conditions. There are emergency departments that specialise in specific fields. For example, gynaecological emergency departments specialise in the care of women's reproductive organs, and pediatric emergency departments specialise in the care of children. (Swedish: Akutmottagning)

Infection department - Hospital department focused on preventing, diagnosing, and treating infectious diseases. (Swedish: Infektionsavdelning)

Intensive care unit - Specialised unit for critically ill patients with close monitoring and care. There are specialised intensive care units.
One example is neuro intensive care units, which provide care to patients who have suffered from acute diseases and injuries related to the nervous system. (Swedish: Intensivvårdsavdelning)

Medical department - Hospital department for admitted patients, where diseases in internal organs are diagnosed and treated. (Swedish: Medicinavdelning)

Neonatal department - Hospital department specialising in care for newborn infants. (Swedish: Neonatalavdelning)

Practical nurse - A healthcare worker who assists nurses and doctors with patient care tasks such as taking vital signs, administering medications, and helping patients with daily activities and personal hygiene. (Swedish: Undersköterska)

Speciality residents - Doctors who are undergoing advanced training in a specific medical speciality, such as anaesthesia or surgery. (Swedish: ST-läkare)

Telemetry - The remote monitoring of a patient's vital signs and medical data using telecommunication technology. (Swedish: Telemetri)

1 Introduction

This master's thesis addresses the challenges healthcare workers face when conventional mobile devices, such as phones and pagers, prove impractical. Touchless interfaces are explored as possible solutions to these challenges. Touchless interaction requires no physical contact between the user and any component of an artificial system (de la Barré et al., 2009). There are several types of touchless interfaces, such as voice control, gesture tracking and eye tracking (Iqbal & Campbell, 2021).

This thesis is in collaboration with the company Ascom, which formulated the initial track of the project. Ascom globally supplies the healthcare industry with information and communication technologies and mobile workflow solutions (Ascom, n.d.-a). One of Ascom's product categories is mobile devices, and they provide feature phones (non-smartphones), pagers and smartphones (Ascom, n.d.-f). There are many benefits of mobile communication devices in a hospital environment.
For example, they enable communication on the go, and they can direct information to the intended recipients instead of distributing it to the entire department. However, their current form makes it impractical to access the device in some situations, such as when performing tasks that require both hands. Furthermore, healthcare workers carry out many unclean tasks, which complicates interaction with mobile devices. The most common transmission of pathogens is contact transmission by hand (Vårdgivarguiden, 2024), and the phone can become a source of bacteria if not appropriately sterilised (Foong et al., 2015). According to Pillet et al. (2016), mobile phones can also carry viruses, and therefore, good hand hygiene and frequent cleaning of the phone are recommended to minimise the spread.

In this project, user studies will be performed to identify situations where touchless interfaces could support healthcare workers. The hospital context brings several challenges that the project needs to address. For example, hygiene routines and social acceptance need to be considered. The first chapter of this report establishes a foundation for the research and contains the research question, aim, deliverables, delimitations, and ethical considerations.

1.1 Aim

This thesis aims to research and identify areas where conventional mobile communication devices can be complemented with alternative, touchless interfaces in time-sensitive hospital environments. These interfaces should improve efficiency for nurses while communicating and accessing information from the device. The findings from this research will support the development of prototypes.

1.2 Deliverables

The deliverables of this master's thesis are the following:

• Problem areas of identified situations within hospital departments that could benefit from touchless interfaces.
• Requirements that support the integration of touchless interfaces within at least one of the problem areas.
• Prototypes utilising touchless interfaces that meet the formulated requirements.

1.3 Research question

For this master's thesis, the following research question was formulated: What should be considered when designing touchless interfaces to support healthcare workers when communicating on a mobile device?

1.4 Delimitations

All user studies will be executed in Sweden; consequently, the findings may not apply to healthcare in other countries. Additionally, observations and interviews will be conducted in close vicinity to Gothenburg. The user studies will primarily take place in high-paced hospital departments due to the assumed high demands on the communication systems. Examples of such departments are intensive care and emergency units. The studied users are nurses and doctors working in these environments; no patients will be interviewed. The technologies considered for the prototypes should be currently available and established, avoiding futuristic and theoretical ideas.

1.5 Ethical considerations

There are ethical aspects that need to be considered during the project. Firstly, the user studies will take place in hospitals where patients may be in vulnerable positions. Therefore, patients will only be observed from afar, and no patients will be approached during the hospital visits. The lack of patient perspectives may result in a gap in the insights received from the research, which can affect how well a final product could be integrated into a real hospital setting. Confidentiality agreements will be signed to reassure the parties involved that no sensitive patient information will be disclosed.

Secondly, the primary studied users are nurses and doctors at hospitals, and all participants will be informed of the purpose of the study before participating.
Additionally, all interviews and observations will follow GDPR; the participants will be informed that the data will be anonymised, and all potential recordings will be deleted at the end of the project. They also have the right to withdraw at any point. Since healthcare workers often work under significant time constraints, the observations will be conducted to minimally disrupt their responsibilities.

Finally, should Ascom decide to implement the findings derived from this thesis and integrate them into future products, it could affect the work of healthcare workers. Therefore, the quality of care will be influenced, emphasising our responsibility not to negatively impact patient care or disadvantage any individuals in the healthcare system.

2 Background

This chapter provides a background to the research area. It includes a description of the company Ascom with a selection of their current products and an explanation of the research problem this thesis will tackle. Related work will also be covered in this chapter.

2.1 Ascom

This project is done in collaboration with the company Ascom, which provides information and communication technologies for healthcare and mobile workflow solutions globally (Ascom, n.d.-a). Their target group is users working in highly mobile and time-sensitive environments who demand near real-time solutions. Besides healthcare, Ascom targets multiple types of organisations and industries, such as hospitality, retail, manufacturing and high-security establishments (Ascom, n.d.-e).

This master's thesis is limited to the healthcare industry, and Ascom provides both software and hardware solutions for hospital environments. Their portfolio of mobile devices includes smartphones, pagers and cordless phones (Ascom, n.d.-f), see Figure 2.1. Their devices have several features that make them suitable for hospital environments.
Their latest smartphone model, Myco 4, has a tough chassis and screen, and can be cleaned and disinfected (Ascom, n.d.-b). Its battery has a hot-swap procedure, which allows swapping the battery without the smartphone shutting down.

Ascom's software products are developed to be used specifically with Ascom hardware, but they can also be combined with third-party devices (Ascom, n.d.-c), such as iPhones. One of their software products is called Ascom Unite, which is a workflow orchestration platform (Ascom, n.d.-d). Ascom Unite uses events and data from source systems to orchestrate these as alerts, chats and tasks. In the software, recipients of an alert can be assigned, and the recipients can be arranged in a prioritisation list. If a primary recipient rejects the alert or does not reply within a predefined time limit, the alert is automatically redirected to the secondary recipient. Not letting everyone get all alerts decreases the sensory load on staff, and the environment becomes calmer.

Figure 2.1: An Ascom smartphone, DECT phone and pager. Mobilenheter för vården [Mobile devices for healthcare] [Image], by Ascom (n.d.-g)1.

According to a clinical consultant at Ascom, Ascom products can be found in most Swedish hospitals. However, only two hospital organisations are working with Ascom smartphones; the other organisations are using other types of mobile devices. The most common Ascom product in Sweden is a bedside call module that patients and healthcare workers use to call for help. A typical scenario in Swedish hospitals is that a patient calls for help by pressing a button on the bedside call module, which creates an event that Ascom Unite registers. This event is treated like an alert that is sent to the predefined primary recipient. The recipient can either accept or reject the alert, and the accept action signifies that they will go and check on the patient. An alert can only be turned off in the patient's room by pressing a button on the bedside call module.
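The escalation flow described above, where a rejection or a missed reply within the predefined time limit redirects the alert down the prioritisation list, can be sketched in a few lines of Python. This is a minimal illustration only; the names (`Alert`, `escalate`, the recipient identifiers) are hypothetical and do not reflect Ascom Unite's actual API or implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of priority-based alert escalation, loosely modelled
# on the workflow described above. All names are hypothetical; this is not
# Ascom Unite's actual API.

@dataclass
class Alert:
    source: str           # e.g. a bedside call module event
    recipients: list      # ordered prioritisation list of recipients
    timeout_s: int = 30   # predefined time limit for a reply

def escalate(alert, responses):
    """Walk the prioritisation list until a recipient accepts the alert.

    `responses` maps recipient -> "accept" or "reject"; a missing entry
    stands for no reply within `timeout_s`. Both a rejection and a
    timeout redirect the alert to the next recipient in the list.
    """
    for recipient in alert.recipients:
        if responses.get(recipient) == "accept":
            return recipient  # this recipient will go check on the patient
    return None  # list exhausted without anyone accepting

alert = Alert(source="room 12 bedside call",
              recipients=["nurse_a", "nurse_b", "nurse_c"])
print(escalate(alert, {"nurse_a": "reject", "nurse_b": "accept"}))  # -> nurse_b
```

The point of the sketch is the ordered walk over the recipient list: no recipient after the accepting one is disturbed, which is what keeps the sensory load on staff low.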
Ascom's existing products utilise interfaces dependent on physical contact between the user and the device. The Research and Development (R&D) department at Ascom Sweden formulated the initial track of this master's thesis, and they believed there were potential benefits of integrating touchless interfaces into their products.

1 https://www.ascom.com/globalassets/assets/global/gmc/healthcare-2021/phone_collection-copy.png?width=640&format=webp

2.2 Research problem

In recent years, the integration of smartphones into healthcare environments has brought many advantages. By using Ascom products, healthcare workers can access near real-time patient information at all times. Additionally, smartphones can work as a communication tool within the workforce, improving communication and information exchange between co-workers (Fiorinelli et al., 2021). Despite the many advantages, there are disadvantages worth considering. The use of smartphones in healthcare can pose a distraction and lead to errors, compromising patient safety (Fiorinelli et al., 2021). Additionally, this limits the workers' ability to engage and connect with their patients. Ascom stated that there are situations where the use of phones is impractical, namely when wearing protective clothing or, in other ways, being physically constrained. According to Cronin and Doherty (2018), there is a need for touchless control in hands-busy settings at hospitals, such as when holding tools or handling patients. In a situation where an alert goes off on a device using Ascom's system and the worker is occupied with other tasks, the alert will continue to sound for a predefined time or until the worker is available to turn it off.
Continuous and repetitive alerts are common in hospital settings and can result in alarm fatigue in the workers (Jones, 2014). This overwhelming feeling can make it more likely for the workers to ignore the alerts and have trouble distinguishing between them.

Cronin and Doherty (2018) also found sterility to be a motivation for integrating touchless control. A study (Foong et al., 2015) examining 226 mobile phones belonging to healthcare workers during a twelve-month period found that 74% of these devices were contaminated with bacteria. Some of the bacteria were potentially pathogenic, and this finding highlights the potential risk of pathogen transmission through these devices. Pillet et al. (2016) report a strong correlation between the use of phones and the presence of viruses in hospitals. To address this issue, they recommend frequent handwashing and regular sterilisation of phones.

There may be situations in hospital environments where an alternative solution to conventional phones is more suitable: a solution that enables the workers to interact with a device without needing to physically touch it. Therefore, this thesis will explore this topic and focus on examining problem areas where the phone can be complemented by a touchless interface to better fit the workers' needs.

2.3 Related work

Related work will be covered in this section, including information about touchless interfaces in both hospitals and other industries. The aim is to show how others have tackled problems similar to those previously mentioned and the opportunities that exist.

2.3.1 Touchless interfaces in healthcare

The use of touchless interfaces in hospitals is not a completely unexplored area. There are situations where they are used to make tasks easier and more efficient for the workers. As stated in a literature review on touchless modalities in hospitals by de Camargo et al. (2021), the most common departments for these technologies were physical therapy, surgery and radiology.
Microsoft Kinect and Leap Motion were the most used touchless technologies in these departments. Kinect is a line of motion-sensing input devices controlled by gestures and spoken commands that was originally released for the Xbox 360 gaming console (Hsu et al., 2017, p. 12). The Leap Motion controller by Ultraleap is a hand-tracking device that enables the user to interact with the system without the need for touchscreens or wearables (Ultraleap, n.d.-d). Both of these sensors for gesture-based interfaces have been used in surgery as a tool to interact with and manipulate medical images without breaking the sterile environment (de Camargo et al., 2021).

HoloLens 2 is a mixed reality headset by Microsoft that incorporates several different touchless tracking systems (Microsoft, n.d.-a). Mixed reality is the meeting between real and virtual environments (Milgram & Kishino, 1994). HoloLens 2 features hand tracking without the need for external controllers, enabling users to manipulate holograms with natural movements. Additionally, it incorporates eye tracking technology that adjusts the holograms based on the user's gaze direction. In situations where the hands are occupied, the headset can also be controlled with voice commands. It utilises spatial mapping to map the environment around the user and to lock holograms to physical surfaces. This headset is applied across different industries, including healthcare, and is used for activities such as remote consultation and viewing 3D images.

Another touchless technology emerging in healthcare is Alexa, a cloud-based voice service developed by Amazon (Amazon, n.d.-b). According to Espinoza-Hernández et al. (2023), Alexa is used in healthcare for various reasons, including answering questions, alerting about critical health statuses, making video calls and taking on small tasks. Additionally, mobile devices with integrated touchless interfaces are available, with Vocera being one example.
Vocera devices are hands-free communication devices that help healthcare workers communicate using voice control (Stryker, 2023). They ensure that workers can quickly access help in emergencies, and it is also possible to use the devices under protective equipment to reduce contamination risks.

2.3.2 Touchless interfaces in other contexts

One example of a widely adopted voice user interface in households is the previously discussed voice assistant Alexa. Alexa serves multiple purposes, including controlling lights and televisions (Amazon, n.d.-d), offering hands-free voice and video calls (Amazon, n.d.-a) and keeping users updated on the latest news (Amazon, n.d.-c). Other similar digital voice assistants are Siri by Apple and Google Assistant.

Another example of a touchless interface is Google's Camera Switches, which allows users to control their smartphones using facial gestures (Google Help, n.d.). It is used by having the phone securely mounted and the camera directed towards the user's face. It is possible for the user to record their own gestures to use as shortcuts for quickly accessing frequently used functions or commands.

A newly released device that incorporates touchless functionalities is Ai Pin by Humane (Humane, n.d.). This wearable screenless device is pinned to the user's sweater or other clothes and is an alternative to conventional smartphones. It is controlled with the use of touch, voice and gestures, and to interact with it, the user starts by touching a touchpad on the device. It can be used as a virtual assistant and can answer questions, take pictures, send messages and comment on items held in front of it. The user's hand operates as a display using the "Laser Ink Display" on the device. Information such as the time, weather and music is projected onto the palm, and the interface can be manoeuvred by tilting the hand and moving the fingers.
Another device that recently became available is Apple Vision Pro, a mixed reality headset that extends the interface to the space around the user (Apple, n.d.). It is controlled by the user's eyes, hands and voice. With Vision Pro, it is possible to view the information normally viewed on a screen in a new format while still staying aware of the world around. It can be used instead of a computer screen or smartphone to, for instance, view movies, make video calls or browse the web.

The previously mentioned devices, HoloLens 2 by Microsoft and Leap Motion by Ultraleap, serve multiple industries beyond just healthcare (Microsoft, n.d.-b; Ultraleap, n.d.-d). The headset HoloLens 2 is incorporated into the manufacturing, engineering and construction industries to increase efficiency (Microsoft, n.d.-b). Additionally, it is used in education to show complex systems. Ultraleap (n.d.-d), the creator of Leap Motion, describes the hand tracking device as universally accessible: designed for anyone and anywhere. The compact and adaptable device can be attached to VR headsets, computers, or other digital interfaces. The touchless software can be used in various contexts, such as museums (Ultraleap, n.d.-e), quick-service restaurants and retail (Ultraleap, n.d.-b), to enhance user experience and promote hygiene. It can also be used for training purposes to simulate a specific scenario (Ultraleap, n.d.-c) and in VR arcades (Ultraleap, n.d.-a). Another virtual reality headset with similar applications is Meta Quest Pro (Meta, n.d.).

Touchless interfaces can also be found in the automotive industry. BMW has a feature that lets users operate functions by making hand gestures in front of a display located under the rear-view mirror (BMW, 2019). For example, they can change the volume and accept a phone call. Furthermore, Jaguar has developed a feature where users can open and close the tailgate by performing a kick gesture under the rear bumper (Jaguar, 2018).
Another touchless approach in cars is voice control, and Mercedes-Benz has integrated ChatGPT into their voice assistant in a beta version (Mercedes-Benz Group, 2022). The integration intends to expand the tasks the voice assistant can respond to and make users experience a more natural dialogue.

3 Theory

This chapter includes theory about common touchless interfaces and their advantages and disadvantages compared to conventional touch user interfaces. Haptic and multimodal interfaces will also be covered. Moreover, the term usability will be described.

3.1 Touchless interfaces

This section exemplifies touchless interfaces. These can also be described as natural user interfaces, which enable users to intuitively and naturally interact with a system using body movements, gestures and voice (Hsu et al., 2017, p. 10).

3.1.1 Voice user interface

Voice user interfaces (VUIs) are interactive systems that utilise speech recognition technology to enable users to input commands and communicate with a device solely through spoken language (Hsu et al., 2017, p. 10). The system interprets the commands, making it possible to interact without using hands or eyes (Interaction Design Foundation, 2016). A specific type of VUI is the auditory interface, in which not only the input but also the output solely consists of sounds (Cohen et al., 2004, p. 6). The user inputs speech, and the system outputs speech and other nonverbal audio such as music, background sounds and earcons. Earcons are auditory icons whose purpose is to convey specific messages.

A notable advantage of VUIs is the hands-free capability. In situations where the user is physically occupied, it is more practical to speak rather than interact with a screen (Pearl, 2016, pp. 3-4). Speech is also a quick and efficient way of conveying time-sensitive information. In addition, VUIs are intuitive since they can be used by anyone capable of speaking, without the need for a complex interface.
However, when considering a VUI, it is important to be aware of the disadvantages. Using one in public spaces may not always be suitable, both for privacy reasons and because of the potential for unintended disruptions (Pearl, 2016, p. 5). Concerns can arise not only about the privacy of what the users are saying but also about the disclosure of sensitive information by the system. According to Easwara Moorthy and Vu (2014), the social acceptability of using a VUI varies depending on the situation and the information being exchanged. Their study investigated voice-activated personal assistants in smartphones and compared the exchange of private and non-private information in a home and a restaurant location. The findings showed that the users preferred the VUI for non-private information in a home setting. In public spaces among strangers, the users found it inappropriate to disclose personal information.

Voice user interfaces can also be challenging to use in noisy situations since the surrounding sounds can interfere with the sounds being interpreted (de Camargo et al., 2021). In these situations, subvocal recognition is an option. This system uses external sensors placed on the user’s throat to capture nerve signals when a person articulates words, silently or aloud (Yonck, 2010). The technology can be utilised in noisy environments and in situations requiring silence.

Utilising auditory interfaces, where both input and output are managed through sound, presents its own set of challenges. The system’s output is transient, preventing users from accessing information at their preferred pace (Cohen et al., 2004, p. 6). Once the sound is emitted, it is no longer available to the user, in contrast to a screen where information can stay visible longer. Because of this, the output places a heavy cognitive load on the user, particularly on their memory.

3.1.2 Gesture-based interface

Gesture-based interfaces rely on movements from the user as their source of input.
The user interacts with the interface by moving their hands, head or other body parts, and these gestures are captured and organised into a sequence of commands (Hsu et al., 2017, pp. 10-11). A specific type of gesture-based interface is the touchless gesture interface, which operates without the need for physical touch. Touchscreens are an example of gesture-based interfaces, but they are not touchless gesture interfaces. Since gestures are a communication form that humans use naturally with each other, this type of interface can be intuitive. However, this is only true as long as the gestures are simple and not too numerous. Gestures can be challenging for the user if they are complex, physically demanding or hard to remember (Vatavu, 2023). Furthermore, interacting with screenless interfaces by using gestures can result in a lack of appropriate feedback (Rise & Alsos, 2020). According to Rempel et al. (2014), gestures used in sign language for letters and numbers are distinct and easy to recall, and image capture recognises them more easily. Therefore, these can be considered in the design of gestures for HCI. Moreover, the authors claim that sign language interpreters can advise which hand gestures are comfortable due to their extensive experience.

The healthcare domain is one area that could benefit from the use of gesture-based interfaces since they can ensure a sterile environment and focused attention (Wachs et al., 2018), as well as assist workers when traditional interfaces may prove insufficient (Rise & Alsos, 2020). Depending on the type of gesture, the technique can be more or less socially acceptable to use. Montero et al. (2010) divided gestures into four categories to study the social acceptance of gestures based on what the user thinks about the interaction and how the observer perceives it. The categories used in the study were originally proposed by Reeves et al. (2005) to describe different approaches to designing public interfaces, and these were secretive, expressive, magical and suspenseful interactions. These categories differ based on whether the manipulation and effect of the interaction are revealed to or concealed from the observer, as illustrated in Figure 3.1. In the study by Montero et al. (2010), it was established that suspenseful gestures, where the gesture is obvious but the effect is not, were the least socially acceptable. The other three categories were equally accepted. These results show that both small, hidden gestures and large, obvious gestures can be accepted, depending on whether the effect is clearly distinguishable to the public.

Figure 3.1: A visualisation of different approaches to designing public interfaces.

3.1.3 Eye tracking systems

Eye tracking is a technology that monitors the movement and position of the user’s eyes to identify their focus and attention (Punde et al., 2017). There are two general types of eye trackers, one of which is remote eye trackers. These use screens, cameras or sensors placed in a limited area to detect eye movements. The other type is mobile eye trackers, which are placed near the eyes of the user and allow them to move around without restrictions. Using the eye as an input device is faster than the more conventional method of using a mouse to interact with computer systems (Sibert & Jacob, 2000). This approach also proves to be a practical solution in situations where the hands of the user are occupied. Despite these advantages, this technique can present challenges since eye movements often are unconscious and the system needs to differentiate between intentional and unintentional viewing (Majaranta & Bulling, 2014). If the eye tracking device is worn on the head and covers the eyes, it can also affect the healthcare worker’s ability to maintain eye contact with their patients.
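A common way to separate intentional from unintentional viewing is dwell-time selection: a target is only activated after the gaze has rested on it for a threshold duration, which filters out brief, incidental glances. The following sketch is a hypothetical illustration of this idea; the sample format and threshold value are assumptions, not taken from any specific eye tracking system.

```python
# Dwell-time selection sketch. Gaze samples are (timestamp_seconds, target_id)
# pairs produced by a hypothetical eye tracker. A target counts as selected
# only after continuous fixation for `dwell_s` seconds, mitigating the
# "Midas touch" problem of unintentional activation.

DWELL_S = 0.8  # illustrative threshold, tuned per application in practice

def dwell_select(samples, dwell_s=DWELL_S):
    """Return the first target fixated continuously for dwell_s seconds, or None."""
    current = None   # target currently being fixated
    start_t = None   # when the current fixation began
    for t, target in samples:
        if target != current:
            # Gaze moved to a new target: restart the dwell timer.
            current, start_t = target, t
        elif target is not None and t - start_t >= dwell_s:
            return target
    return None
```

Brief glances never accumulate enough dwell time, so only a deliberate, sustained fixation triggers a selection.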
Eye contact is important during communication to fully understand the intention and message being conveyed (Davidhizar, 1992). A lack of eye contact can be perceived as disinterest and a lack of care.

3.2 Haptic interface

Haptic interfaces mediate communication between humans and machines by touch (Hayward et al., 2004). To provide information, haptic interfaces produce mechanical stimuli that affect the human’s perception of touch and proprioception. Proprioception refers to a human’s ability to perceive body position, movement and weight. Vibrotactile sensation is a term for perceiving vibrating objects in contact with the skin. According to Schneider et al. (2017), vibrotactile feedback is the most common haptic technology and can be found in smartphones. Usually, haptic feedback is most effective in combination with other modalities (MacLean, 2000), and haptic interfaces can be helpful when other modalities are overloaded (Osvalder & Ulfvengren, 2015, p. 369). The resolution of the haptic modality is relatively low, leading to few values that can be clearly distinguished from each other (Hoggan, 2013).

3.3 Multimodal interface

A system qualifies as a multimodal interface if it incorporates two or more input modalities, like voice and gestures, along with multimedia outputs (Oviatt, 2007, pp. 414-418). These interfaces have many advantages compared to single-modality interfaces. They can be used by a wide range of users and in scenarios where one input mode is unavailable, allowing users to switch to alternative input methods. Furthermore, the interfaces tend to be easier to learn and use, resulting in users preferring them in various situations.

3.4 Usability

The International Organization for Standardization (2018) defines usability as: “the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” (ISO 9241-210, section 3.13).
Effectiveness can be described as the degree to which the user’s goal can be achieved, and efficiency refers to how much effort is needed to achieve the goal (Jordan, 1998, pp. 5-7). Satisfaction is more subjective than the other two aspects and can be harder to measure. It includes how well the product meets the user’s needs and expectations in terms of cognitive, physical, and emotional aspects (International Organization for Standardization, 2018, ISO 9241-210, section 3.10). According to Nielsen (2012), usability refers to how easy an interface is to use, while utility refers to the functionality of the design. The two attributes combined determine the usefulness of an interface.

Jordan (1998, pp. 25-38) listed ten principles of usable design that are associated with usability, and they are described below:

Consistency
Similar tasks within the product should be performed similarly.

Compatibility
How tasks are accomplished should correspond to the user’s expectations based on their experience with other products.

Consideration of user resources
The interaction should consider the demands placed on the user’s cognitive and physical resources.

Feedback
Any actions performed by the user should be acknowledged, and information about the results of the actions should be provided.

Error prevention and recovery
Minimise the likelihood of user errors, and if an error occurs, provide easy and quick recovery.

User control
Give the users as much control as possible over the actions taken by the product and the state the product is in.

Visual clarity
Information should be displayed for easy and fast access without causing any confusion.

Prioritisation of functionality and information
The most essential functionality and information should be easily accessible.

Appropriate transfer of technology
Apply appropriate technology developed in other contexts to enhance the usability of a product.
Explicitness
Provide cues for the product’s functionality and method of operation.

4 Methodology

This chapter starts by explaining the concepts of Research through Design and wicked problems. Then, it describes design methods, organised according to the stages of the iterative Design thinking process.

4.1 Research through Design

Methods and processes of design practice can be applied to produce new knowledge through an approach called Research through Design, abbreviated RtD (Zimmerman & Forlizzi, 2014). While commercial design practice aims to make a successful commercial product, a valuable outcome of RtD can be a more comprehensive understanding of a challenging problem. Furthermore, RtD is more well-documented than design practice. Besides RtD, there are also Research for design and Research into design, and it was Frayling (1993) who suggested the three categories of design as research: Research into art and design, Research for art and design, and Research through art and design. He argued that research is essential for the teaching and practice of design and art. Research for design aims to improve the practice of design, while Research into design includes research on the human activity of design (Zimmerman & Forlizzi, 2014).

4.2 Wicked problems

According to Churchman (1967), Rittel described wicked problems as a “class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing” (p. B-141). Rittel and Webber (1973) compared wicked problems to problems in the natural sciences, which they call tame problems. Unlike wicked problems, tame problems are definable, have an enumerable set of solutions, and the correctness of the solution is assessable.
They also listed the properties of a wicked problem, including that it is unique, can be described only after solutions have been developed, has no stopping rule, and has no true or false answers. Buchanan (1992) argues that designers have to handle wicked problems since they conceive and plan what has not yet been created. Moreover, he suggests a non-linear design thinking model as an appropriate approach to wicked problems due to the lack of definitive conditions and limits.

4.3 Design thinking

Design thinking is a human-centred and iterative process used to gain insights into user experiences, define problems and discover opportunities that can lead to innovative solutions (Interaction Design Foundation, 2016). Stanford University’s Hasso Plattner Institute of Design suggests the following five stages for a design thinking process: Empathise, Define, Ideate, Prototype and Test. See Figure 4.1 for an illustration of the design process. These stages are not always carried out in a straightforward sequence but are instead partially executed simultaneously and repeated as the need arises. This section explains the five stages and methods that can be conducted during each of them.

Figure 4.1: Illustration of a Design thinking process.

4.3.1 Empathise

During the Empathise stage of the design process, the goal is to understand the users and their decisions, needs and thoughts within the design context (Hasso Plattner Institute of Design, n.d.). To be able to solve a design problem, it is important to empathise with the intended user and not focus on one’s own opinions. Interviews, focus groups and observations can be performed to enhance the understanding of the users. Before conducting user studies, the participants should be informed about their rights, the purpose of the data gathering, and how the collected data will be used and stored (Sharp et al., 2019, p. 262).
In addition to these methods that involve users, data can be gathered from literature and by analysing the current market.

Pilot study
A pilot study can be carried out to ascertain that the planned method is applicable before starting the main study (Sharp et al., 2019, p. 265). A pilot study serves as a small test run of the main study, where potential issues can be found and adjusted beforehand. Participants in the pilot study should not participate in the main study, since their prior knowledge can distort the results.

Interviews
One common data gathering method is the interview, which can be categorised into three types: unstructured, semi-structured and structured (Sharp et al., 2019, pp. 268-304). How an interview is classified depends on how much control the interviewer has over the interview process. The most controlled type is the structured interview, where the interviewer follows a preplanned set of questions. Structured interviews often consist of closed-ended questions, which implies that the interviewee chooses an answer from a predetermined set of alternatives. On the other end of the spectrum, there are unstructured interviews. These are more similar to a conversation and explore a subject more in depth. For unstructured interviews, open-ended questions are suitable, which have no specific format or predetermined answers. Semi-structured interviews are a combination of structured and unstructured interviews. The interviewer should follow a guide with preplanned questions to ensure the same topics are included in all interviews.

Interviews can also be conducted in groups, and one form of group interview is the focus group (Sharp et al., 2019, pp. 271-272). Usually, three to ten people participate, and the selection of participants should reflect the target group. The session is guided by a facilitator, who should follow a predetermined agenda.
Besides guiding the discussion, the facilitator is responsible for letting every participant have their say. If the focus group is successful, the participants can help each other recall situations, and hence a participant can sometimes share more experiences than in a one-to-one interview (Unger & Chandler, 2012). However, the group setting can also lead to exaggerated statements.

Observations
Observations in the intended context are useful for gathering data that might be difficult to obtain from conversations with the user (Hasso Plattner Institute of Design, n.d.). For example, explaining accurately how you perform a task can be challenging (Sharp et al., 2019, p. 288). There are several approaches to conducting an observation. An observation can be either direct or indirect, depending on whether or not the investigator is the one watching and recording the activities on the spot (Bernard, 2011). Observations can take place in the field where the users perform their daily tasks, or in a controlled environment where the users perform specified tasks (Sharp et al., 2019, pp. 287-289). Before conducting an observation, it is recommended to create a framework for structuring and focusing the observation.

Literature reviews
In a literature review, information from published sources is gathered to lay a foundation for the current project (Hanington & Martin, 2019, pp. 148-149). The selected sources should be relevant to the project, and they can be organised by topic to structure the review.

Benchmarking
Benchmarking is a process of evaluating and comparing the achievements of a company’s products and work with those of its competitors (Fridley et al., 1997). A benchmark helps the company to make knowledgeable decisions. The process starts by determining what to benchmark and selecting comparative companies. Data is then collected from the companies and analysed to understand the differences and opportunities. The findings can later be implemented in the company.

4.3.2 Define

The aim of the Define stage is to draw conclusions from the data gathered in the Empathise stage (Hasso Plattner Institute of Design, n.d.). The purpose is to clarify and define the scope of design possibilities and limitations, setting the direction for the following work and making it possible to tackle a meaningful challenge. The design space is defined by finding patterns in the gathered data and drawing conclusions. In this section, the methods affinity diagram, Hierarchical Task Analysis, requirements, persona, and scenario will be presented.

Affinity diagram
The purpose of an affinity diagram is to examine data and identify themes (Sharp et al., 2019, p. 324) and to help designers base their solutions on data (Hanington & Martin, 2019, p. 12). The building blocks of an affinity diagram are insights written on notes (Sharp et al., 2019, p. 324). Each note is compared to the others to find similarities, and notes with a common theme are clustered. Affinity diagrams can also be used to analyse data from usability testing, and by colour-coding each participant, recurring problems can be identified (Hanington & Martin, 2019, p. 12).

Hierarchical Task Analysis
To thoroughly describe how a task is performed, a Hierarchical Task Analysis (HTA) can be conducted (Stanton et al., 2017, pp. 40-44). An HTA is based on collected data regarding the task, including data about the steps, the human-machine interaction and how decisions are made. The elements of an HTA are organised hierarchically, with the overall goal of the task placed at the top. This goal is decomposed into several subgoals, which in turn can be broken down into meaningful subgoals. The subgoals at the bottom have associated operations, which are the actions the agent needs to perform to reach the subgoals.
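The hierarchical decomposition described above can be represented as a simple tree: the overall goal at the root, subgoals as children, and operations as the leaves. The goal and operations in the following sketch are invented examples for illustration, not taken from the thesis data.

```python
# Sketch of an HTA as a nested structure. The example content (goal names
# and operations) is hypothetical.
hta = {
    "goal": "Respond to patient alert",
    "subgoals": [
        {"goal": "Receive alert", "operations": ["read alert on device"]},
        {"goal": "Act on alert",
         "operations": ["accept alert", "walk to patient room"]},
    ],
}

def leaf_operations(node):
    """Collect all operations in the order the hierarchy lists them."""
    ops = list(node.get("operations", []))
    for sub in node.get("subgoals", []):
        ops.extend(leaf_operations(sub))
    return ops
```

Walking the tree recovers the concrete actions the agent performs, which mirrors how an analyst reads an HTA from the top-level goal down to its operations.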
The HTA also contains plans that describe how the subgoals are reached. For example, a plan can state whether a step depends on another step, in which order the steps should be performed, or whether a step should be repeated until a criterion is met. One disadvantage of the method is that it requires much time and effort for large and complex tasks.

Requirements
A requirement specifies the functionality or performance of an intended product, and capturing requirements is important for defining the product (Sharp et al., 2019, pp. 385-417 & p. 19). The formulation of requirements can be based on the results from data gathering methods, such as interviews, observations and questionnaires. Requirements can be categorised into different types, and below are six types of requirements:

• Functional requirements - Specify what the product will do.
• Data requirements - Specify characteristics of the required data that the product will handle.
• Environmental requirements - Specify the operational environment in terms of physical, social, organisational and technical aspects.
• User characteristics - Specify the attributes of the user group.
• Usability goals - Specify usability criteria regarding efficiency, effectiveness, safety, utility, learnability and memorability.
• User experience goals - Specify the nature of the user experience.

Persona & scenario
Two methods often used together that help to bring requirements to life are persona and scenario (Sharp et al., 2019, p. 403). Personas describe typical users of the developed product extensively and help to communicate user characteristics and goals within the design team (Cooper, 1999, as cited in Sharp et al., 2019, p. 403). Personas are based on real users who participated in the data gathering and describe the users’ behaviour, attitudes, activities and environment. Additionally, a persona includes the user’s goals relating to the design inquiry.
A persona is seldom longer than one page, including a name and an image of the person (Hanington & Martin, 2019, p. 170). Carroll (1997) defines scenarios as narrative descriptions of what people do and experience when using the product under development. A scenario is built around a user’s perspective and includes descriptions of the actions taken, the intention of these actions and the outcomes in terms of the user’s motivations and expectations. Scenarios concretise the use and thus help developers to create results that support human activities.

4.3.3 Ideate

After defining the design space in the previous stage, the next stage is Ideate. The goal of this stage is to generate solutions to the identified problems by combining rational and imaginary ideas (Hasso Plattner Institute of Design, n.d.). Evaluating the ideas should be avoided during the ideation process so as not to hinder creativity and imagination. The purpose is not to immediately pinpoint an optimal solution but rather to generate a wide range of ideas that later, through user testing, can result in a suitable solution. There are several different methods for ideation, including brainstorming, brainwriting, braindrawing, SCAMPER and Six thinking hats.

Brainstorming, brainwriting & braindrawing
According to Wilson (2013), brainstorming is a method that can be used for many purposes, for example, to generate ideas, to find solutions to problems and to explore new design spaces. In brainstorming, participants shout out ideas on a predefined topic as quickly as possible. Quantity is prioritised higher than quality, and many generated ideas characterise a successful session. Wilson also mentions the methods brainwriting and braindrawing as alternatives to brainstorming. In brainwriting, participants write down their ideas within a time limit and then pass them to another participant, who continues elaborating on them (Brahm & Kleiner, 1996).
Braindrawing also includes participants passing their ideas to each other, but the ideas are sketched instead of written down (Wilson, 2013).

SCAMPER
Ideas from earlier ideation sessions can be further developed using SCAMPER (Wikberg Nilsson et al., 2017, pp. 132-133). In this method, the same questions are asked for each idea, and Dam and Teo (2024) describe the questions as follows:

• Substitute - What can be substituted in the idea?
• Combine - What can be combined to enhance synergy?
• Adapt - What parts of the idea can be adapted to solve the problem?
• Modify, Magnify, Minify - What in the idea can be modified or emphasised to a lesser or greater extent?
• Put to another use - How can the idea be used elsewhere?
• Eliminate - What can be eliminated or reduced in the idea?
• Rearrange - How can the idea be reordered or reversed?

There are guiding subquestions available for each question to facilitate the process.

Six thinking hats
Six thinking hats is a method that helps a team to separate thinking into six distinct functions (The de Bono Group, n.d.). Each function is represented by a hat that is mentally worn, and de Bono (2017) describes the hats as follows:

• The White Hat focuses on objective facts.
• The Black Hat highlights the weaknesses of an idea.
• The Yellow Hat brings up the positive aspects of an idea.
• The Red Hat provides an emotional perspective.
• The Green Hat is creative and proposes new ideas.
• The Blue Hat ensures that the method is carried out right.

This method helps to systematically reflect on issues, decisions and opportunities (The de Bono Group, n.d.). Furthermore, it can be conducted to generate more and better ideas.

4.3.4 Prototype

After ideation, one typically moves on to the Prototype stage. In the early stages, the prototypes should be quickly created, and later on in the process, they should be more detailed (Hasso Plattner Institute of Design, n.d.). Sharp et al. (2019, pp. 422-428) describe prototypes as manifestations of ideas. They allow designers to communicate their ideas, both within the design team and to other stakeholders, and to explore their suitability. Prototypes can be discussed in terms of fidelity, where the higher fidelity a prototype has, the closer it is to the final product.

Low-fidelity prototyping
To explore ideas early in the development, low-fidelity prototypes are suitable since they are simple, quick and cheap to produce and modify (Sharp et al., 2019, pp. 426-428). Low-fidelity prototypes do not have the same functionality and look as the final product. One type of low-fidelity prototyping is storyboarding. In a storyboard, a series of sketches or scenes is used to show how a user could perform a task using the product. Wizard of Oz is another low-fidelity prototyping method that is useful for software-based prototypes. In Wizard of Oz, the software-dependent response is simulated by a human.

High-fidelity prototyping
Compared to low-fidelity prototypes, high-fidelity prototypes have more functionality and a closer visual resemblance to the final product (Sharp et al., 2019, pp. 428-429). Existing hardware and software components can be integrated to create high-fidelity prototypes. A high-fidelity prototype can also exist in digital form, for example, as a computer-aided design model (Hanington & Martin, 2019, p. 176). Users can evaluate high-fidelity prototypes to provide feedback on aesthetics, form, interaction and usability.

4.3.5 Test

The Test stage is executed after defining a problem and creating prototypes, by letting stakeholders interact with and give input on the developed product or concept (Hasso Plattner Institute of Design, n.d.). The purpose is to find opportunities for improvement and to refine the solutions, but also to learn more about the user.
In this section, the methods Pugh matrix, opportunistic evaluation, heuristic evaluation and usability testing are described. Data gathering methods, such as interviews, observations, and questionnaires, can also be used for evaluation purposes (Sharp et al., 2019, p. 520).

Pugh matrix
To compare design ideas and evaluate which best meets the defined criteria, a Pugh matrix can be utilised (Cervone, 2009). This method ensures that the opinions remain objective and that the result stays consistent. The process starts by listing the design ideas in the matrix’s first column and the criteria across the matrix’s first row. One idea is selected as the baseline, and all other ideas are compared against it. Based on the comparison to the baseline, a “+” (better), “-” (worse), or “S” (same) is recorded against each criterion. The total score for each idea is then calculated by adding up the pluses and minuses, and the higher the score, the better the result. The ideas with the worst results are eliminated, resulting in one or a few optimal ideas.

Opportunistic evaluation
In the early stages of the design process, an opportunistic evaluation can be executed in order to receive knowledge and input on a design idea (Sharp et al., 2019, p. 507). These evaluations are often informal and done by asking the users questions about their opinions and receiving their feedback. By conducting this type of early evaluation, it is still possible to easily improve the design before investing too much time. Additionally, it clarifies whether it is worth proceeding with an idea and creating prototypes with higher fidelity.

Heuristic evaluation
Heuristic evaluation is an inspection method that aims to identify problems in an interface (Sharp et al., 2019, pp. 549-552). It can be conducted together with an expert, someone who possesses knowledge about interaction design and the users’ needs and behaviour.
In a heuristic evaluation, an interface is evaluated against heuristics, a set of guidelines, and the evaluators list the design aspects that do not follow them (Moran & Gordon, 2023). Which set of heuristics to use depends on what is to be assessed. People tend to identify different problems, and thus it is beneficial to involve several evaluators (Nielsen, 1994). A heuristic evaluation can be made early in the development (Nielsen & Molich, 1990).

Usability testing
Usability testing is a method for identifying issues and possible improvements in a design and understanding the intended user’s preferences (Moran, 2019). The user performs predefined tasks, and performance measures are collected, such as the time it takes to complete a task (Sharp et al., 2019, pp. 517 & 524-525). “Think aloud” is a common approach during usability testing, which implies that the participants are encouraged to say what they are thinking and doing out loud. To gather information on the user’s impression of the interaction, a user satisfaction questionnaire and an interview can follow. As in other user studies, the participants should be informed before the test starts about how the test will be conducted and how the data will be collected and used. Moreover, they should be informed about their rights, and one common right is the right to withdraw from the study.

5 Planning

This project starts in mid-January and will proceed until early June, and its Gantt schedule can be seen in Figure 5.1. The problem this project addresses can be considered wicked since its formulation is not definitive and will become more defined as the project progresses. It will be tackled by conducting an iterative Design thinking process, and the intention is to carry out the five stages described in Section 4.3: Empathise, Define, Ideate, Prototype and Test. Meetings are planned every other week with the supervisor and every week with the mentors at Ascom.
These meetings are an opportunity to receive continuous feedback and to discuss how the work could proceed. The first four weeks of the project are primarily dedicated to writing the planning report. Simultaneously, a pre-study will be conducted, including a literature review of touchless interfaces and a benchmarking of products using touchless interfaces.

The Empathise stage will start in the middle of February and continue until the middle of March. This stage will be emphasised greatly due to the complex use context involving several actors performing critical tasks. Moreover, the users are experts with knowledge and experiences that people in general do not possess. Several hospital departments will be visited to gain an understanding of this unfamiliar area, and the focus of the visits is to study the interaction between healthcare workers and communication devices.

The Define stage will start with structuring all gathered data to identify problem areas that could benefit from touchless interfaces. In mid-March, the plan is to choose one of the problem areas to centre the subsequent stages around. Requirements for this problem area will be written and complemented with a persona and a scenario to concretise the user group and situation. When the problem area is outlined, the Ideate stage can start by performing one or more ideation methods. The ideas need to be evaluated and reduced in number before proceeding to the Prototype stage, which will start at the beginning of April. The selected ideas will influence the selection of prototyping methods, although they will likely be low-fidelity prototyping methods because of the time constraint. This stage will shortly be followed by the Test stage to receive early feedback on the prototypes. Due to the iterative nature of the design process, several methods will be revisited during the project.
For example, complementary data gathering will be needed to fill gaps of knowledge discovered later in the project, and insights from evaluations will support the making of new, better prototypes. The intention is to write the master’s thesis report continuously during the project. However, in May, the main focus will be completing the report. The goal is to finalise the master’s thesis in June.

Figure 5.1: The project represented in a Gantt chart.

6 Execution

This chapter chronologically presents the execution of the thesis project and starts by describing the pre-study. Subsequently, it covers the design process, which followed the stages of Design thinking. For every stage, the execution and results of the applied methods are described. Since Design thinking is an iterative process, some stages appear more than once. See Figure 6.1 for an illustration of the process.

Figure 6.1: Illustration of the process.

6.1 Pre-study

The pre-study consisted of a literature review and a benchmarking. It was an opportunity to gather knowledge about touchless interfaces and how they are utilised today, Ascom’s product portfolio and previous work related to this project.

6.1.1 Literature review

A literature review was conducted on touchless, haptic and multimodal interfaces, which provided knowledge of what they imply, their benefits and drawbacks, and the social aspects of using them. The studied touchless interfaces included eye tracking, voice user interfaces and gesture-based interfaces. These findings were intended to support the upcoming solution development and evaluation, and they can be found in Section 3. In addition, literature on touchless interfaces in healthcare was studied, and these findings are presented in Section 2.3.1. This review aimed to gain insight into what has been investigated prior to this thesis and what conclusions others have drawn.
6.1.2 Benchmarking

To gain inspiration on how touchless technologies can be applied, a benchmarking of products utilising them was carried out. The benchmarking was not limited to solutions within healthcare but also covered those utilised in other industries, such as the automotive industry. Examples of studied products include Vocera, Humane Ai Pin, Leap Motion and HoloLens 2. The findings from the benchmarking can be seen in Section 2.3.

Familiarising ourselves with Ascom’s products was also crucial for this project. A clinical consultant at Ascom held a demo of their product portfolio, showcasing some of their hardware and software solutions. The primary focus was to demonstrate how Unite, the system where alert chains can be created and executed, worked. In the demo, the alerts were sent to the users’ smartphones, where they could decide whether to reject or confirm an alert. By attending this demo, an understanding was gained of how the products are supposed to be used. This knowledge was important since the intended use does not always correspond with how users actually use them.

6.2 Empathise

During this stage, user studies were performed to investigate whether healthcare could benefit from touchless interfaces and, in that case, in which situations. In addition, the user studies aimed to collect data on what to consider when integrating touchless interfaces in a hospital environment. Firstly, a pilot study was conducted, followed by interviews and observations at hospital visits and an interview with a nurse. A focus group was not conducted because it would have been challenging for the visited departments to spare several healthcare workers simultaneously without affecting patient safety. Another alternative would have been to find healthcare workers willing to participate in the study in their spare time, which would also have been difficult.
6.2.1 Pilot study

Before the visits to the hospital departments were conducted, an observation guide and interview questions were prepared. A pilot study including two interviews was conducted to assess the suitability of the interview questions. Two recently graduated doctors were interviewed over the phone, one at a time. Before the interviews, they were informed that the gathered data would be used for this master’s thesis and that they could withdraw from the interviews. All the interview questions were asked, and the answers were written down along with reflections about the questions. The most significant finding of this study was that the alert and communication setup at hospitals often differs from the setup Ascom explained during the demo. In the demo, alerts were managed through an individual alert system on smartphones. The participants said that Ascom’s DECT phones are used for almost all communication, and that was also common in their previous occupations at other hospitals. This information prepared us for the fact that many hospitals may not have transitioned to newer technology.

6.2.2 Interviews & observations

Five hospital departments were visited during this stage. The mentors at Ascom proposed intensive care, emergency, infection, and neonatal departments as potentially interesting places to visit due to their high pace and need for sterility. Departments were contacted by emailing contact persons on hospitals’ information pages, which resulted in three visits to different emergency departments and one to a neurological intensive care unit. After the first visits, we realised it would be interesting to visit one more department with inpatients. Therefore, a medical department was contacted via Ascom. This department was also interesting since they used smartphones for handling alerts, which differed from the other departments that used DECT phones and collective alert systems.
In addition to the hospital visits, a nurse who worked at another medical department was interviewed. Before the visits, an information sheet was emailed to the contact persons to be handed out to the personnel. It included the title of the project, the research question, the goal of the project, the purpose of the visits, and what their participation implied. Four out of five visits began with changing into scrubs (medical uniforms) to blend in better during any patient meetings. During the fifth visit, no patient meetings were attended; thus, no scrubs were needed. At the hospital departments, direct observations were made of how the communication and alert systems were set up and how the healthcare workers interacted with different devices. Since all the interviewees were working, most interview questions were asked on the go when appropriate. Observations and interview answers were recorded in a notebook, and the notes were transcribed to a text document after the visits. The hospitals have photo restrictions, and hence, no photos were taken. Below is a summary of a more extensive logbook containing insights gathered from each visit and the interview with the nurse.

Emergency department

A visit to an emergency department was conducted to assess their use of and demand for communication devices. During this visit, interviews were held with one practical nurse responsible for managing external and internal communication, two doctors specialised in emergency care and two nurses. Explanations of the different professions can be found in the Glossary. Additionally, the practical nurse and one of the nurses were observed during their work, and a general observation of the department was conducted. The department was equipped with an alert system linked to each patient room, with the alert information displayed in the corridors. See Figure 6.2 for an illustration of a hospital corridor. The nurses and the practical nurses collectively managed these alerts.
Due to this system, the work environment was noisy, and continuous beeping sounds filled the corridors. Despite this, several workers preferred collective responsibility over a system involving individual alert units. They believed they had clear procedures for managing the alerts and found the information easily accessible. Only workers with specific leadership responsibilities and doctors carried DECT phones during their shifts; the rest did not have their own units. Since the doctors’ expertise is sometimes needed in other departments, it was highly prioritised to answer calls in all situations. They described scenarios in which they were unable to answer their phone, needing to either cancel their current task to answer or rely on someone else to respond on their behalf. During point-of-care tasks, answering the phone was often inconvenient. In such instances, someone would respond only to verify the urgency of the matter, and if deemed non-critical, they would end the call.

Figure 6.2: An illustration of a hospital corridor with an alert system.

Pediatric emergency department

During the visit to a pediatric department, interviews were held with two speciality residents, one nurse with extra responsibilities and one doctor with extra responsibilities. A general observation of the work environment was carried out, which focused on shadowing the interviewed nurse throughout her work assignments. The department primarily used DECT phones, and its different teams had one phone each. All doctors and workers with specific leading roles were equipped with at least one phone at all times. Similarly to the previously visited emergency department, alerts of different severity levels were managed through a system in the hospital corridors. Information about the type of alert and room number was shown on wall and ceiling displays.
The interviewees expressed that they had become accustomed to the frequent sounds in the corridor and seldom reflected on them unless there was an emergency. The interviews and observations revealed that there was a need to communicate via devices while simultaneously performing two-handed tasks. Several of these tasks involved assisting patients, but others included using a computer and managing medications. One of the speciality residents had requested a voice-controlled device worn around the neck for such situations but had not received it.

Gynaecology emergency department

A gynaecology emergency department was also visited. During this visit, multiple nurses and practical nurses were shadowed and asked questions. One longer interview was held with one of the nurses. This department utilised alert and phone call systems similar to those in the previously examined emergency departments. Many patient meetings in this department touch on sensitive topics; hence, using phones during these meetings was avoided whenever possible. Occasionally, phones were left outside patient rooms to avoid disturbances, and nurses often answered calls on behalf of the doctors during examinations. Each worker in the department had a personal safety alarm that was supposed to be worn on the clothes and used in threatening situations. However, most workers did not frequently wear these wearable devices. They often forgot them or did not feel the need to wear them since threatening situations were very rare.

Neurological intensive care unit

A visit was made to a neurological intensive care unit, where the workers and patients had unique needs compared to the previously observed emergency departments. The visit involved conducting observations in three rooms, where nurses and practical nurses monitored and assisted the patients at all times. Collectively, six nurses were observed.
Furthermore, interviews were held with a practical nurse, a nurse, a nurse with leading responsibilities and an anaesthesiology resident. Similar to the previous observations, this unit had alert systems in the corridors. These alerts sounded in all rooms, including the patient rooms. Since the patients admitted to this unit need intensive care, they are closely monitored. The monitoring of the patients’ vitals and medication infusion resulted in frequent alert sounds. The interviewed practical nurse pointed out that the sounds could disturb the patients, especially since they could experience intense headaches. Each room had a DECT phone, for which the room team was collectively responsible. All doctors and other workers with special responsibilities had their own phones.

Following hygiene routines was important in the neurological intensive care unit. Therefore, the workers frequently used gloves and aprons when assisting the patients. The use of this protective equipment affected their use of communication and monitoring devices. During an observation, one nurse received a phone call while wearing gloves. To answer, they first had to interrupt what they were doing, then remove one glove, pick up the phone and cradle it between the shoulder and ear. As they answered the call, they removed the remaining glove. Several instances of touching displays to turn off monitoring alerts with gloves on were also observed, even though this does not follow the routines. The inconvenience of managing alerts and phone calls when wearing protective clothing resulted in the workers not always adhering to hygiene routines, potentially increasing the risk of transmission of pathogens between patients.

Medical department

The final observation of the initial user studies was conducted in a medical department.
Interviews were held with one nurse and one practical nurse; beyond this, three additional nurses and three practical nurses were asked questions for further insights. Each nurse and practical nurse carried their own Ascom smartphone to manage alerts and phone calls and an additional smartphone that monitored the patients’ vitals. Furthermore, they had displays in the corridors and lights outside each room indicating where an alert had been triggered. Unlike all previously visited departments, the alerts in the corridors were silent. The only sound caused by an alert originated from the workers’ smartphones, resulting in a quieter environment.

It was clear that the workers did not use the smartphone’s alert system as intended. Instead of accepting an alert on the phone, they only viewed the information on the phone or in the corridor and proceeded to the patient room, where they turned it off. Similarly, when they could not take on an alert, they did not use the reject option on the smartphone, causing the phone to continue sounding until someone else deactivated it at the source. The practical nurse expressed that they did not fully understand the alert app and were unsure about the outcomes of using these functions. Others had not formed the habit of using them but thought they should start since it would ease the work for themselves and their colleagues. Several workers also described situations where handling the phone was inconvenient, such as when wearing gloves or assisting patients. In these situations, the continuous sound of the phone could disturb the meeting with the patient. The workers expressed how patients sometimes commented: “Now they want to reach you” and “You have a lot to do”. All interviewed workers expressed that they thought the smartphone was too heavy and big and desired a smaller version. When they were asked about their opinions on carrying an additional device with touchless features, they were not enthusiastic.
This department had previously used an earlier version of an Ascom smartphone that had a small additional display on the top for easy information access. They all missed this version and explained that the two main functions needed on the phone were to manage alerts and phone calls and that other features were often redundant.

Interview with nurse

A short interview was held with a nurse who shared their experience with communication devices in their department. They worked in a medical department where the nurses had their own phones. Each nurse was responsible for six to seven patients, and all calls regarding their patients were directed to them. According to the nurse, incoming phone calls could disturb their work, for example, when they cannot answer a call and the phone keeps ringing. This type of situation can happen when they have occupied hands or unclean gloves, such as when performing a jaw thrust, inserting a venous catheter, or administering an injection. During the evening, night and weekend shifts, one of the nurses needed to carry the department phone in addition to their own phone. Since phone calls often interfere, the nurses argue about who should take the department phone. The department utilised an alert system in the corridor, and the nurse did not find this system disturbing. However, they said that reaching the assistance button can sometimes be troublesome when they have to be close to the patient.

6.3 Define

The purpose of this stage was to organise, understand and summarise the findings from the pre-study and the Empathise stage. It started by making an affinity diagram, whose results were used to write problem areas and requirements. Thereafter, two personas and scenarios were created.

6.3.1 Affinity diagram

All the gathered data was categorised in an affinity diagram to identify problem areas that could benefit from touchless interfaces and to systematically map the users’ needs.
The affinity diagram included data from the visits, the interview and the literature review, assigning each source a unique colour to track the origin of an insight. After each visit, the insights were written on notes in Figma and organised in preliminary categories together with the data from previous visits. When the last insights were assigned a category, the content of each category was revisited. Several iterations of recategorising resulted in the final affinity diagram in Figure 6.3, which includes the following categories:

• Challenges - This category included identified challenges that might benefit from integrating touchless interfaces. The challenges were divided into subcategories, such as Interference with hygiene routines and Hands-occupied in point-of-care. Examples of insights:
– “Hard to reach the assistance button when needing to be close to the patient”
– “Sometimes they let the phone ring if they are wearing gloves and call back later.”
• User experience - Several subjects were included in this category, such as Social aspects, Usability and Work environment. Examples of insights:
– “You want to touch the mobile phone as little as possible.”
– “Healthcare workers believe that patients sometimes feel guilty when the phone makes sounds.”
• Physical aspects - This category gathered insights about how the physical design of the current devices affects healthcare workers. Moreover, it contains insights related to workwear and personal protective equipment. Examples of insights:
– “The aprons cover all pockets.”
– “Heavy objects in the pockets of the shirt cause the shirt to move forward when bending over.”
• Alarm - This category gathered insights regarding different types of alerts and alarms, including alert systems, monitor systems and emergency alarms. The insights covered needs, attitudes and functionalities related to the various kinds of alerts.
Examples of insights:
– “The healthcare workers do not carry the personal safety alarm because it is seldom used.”
– “Due to the many alerts in the corridor, some are filtered out, increasing the risk of important alerts being overlooked.”
• Communication - The insights of this category were related to communication at the hospital departments, which mainly occurred over phone calls or by looking someone up. Examples of insights:
– “To know who is calling, you have to check the screen and usually also answer.”
– “Calls to the doctor’s phone always need to be answered.”

Figure 6.3: The final affinity diagram.

6.3.2 Problem areas

The problem areas were formulated by summarising and dividing the key findings from the research. Most of these originated from the category Challenges in the affinity diagram. In total, 12 problem areas were written, and they all had a similar structure, including a background to the problem, the problem itself, and its effect. The content of the problem areas is also alike, and only some factors differentiate them from one another. These factors result in different circumstances, so it was important to distinguish them. For example, when a healthcare worker follows hygiene routines, their hands can be free to interact with a device as long as they do not touch anything, compared to when their hands are occupied during two-handed tasks. Moreover, the procedures for handling calls differ from those for handling alerts. The titles of all problem areas are listed below, and their explanations can be found in Section 7.1.

• P1. Following hygiene routines while responding to alerts
• P2. Following hygiene routines while managing incoming calls
• P3. Following hygiene routines while accessing information
• P4. Following hygiene routines while managing telemetry
• P5. Responding to alerts during two-handed, point-of-care tasks
• P6. Managing incoming calls during two-handed, point-of-care tasks
• P7.
Access information during two-handed, point-of-care tasks
• P8. Initiate calls during two-handed, point-of-care tasks
• P9. Call for assistance while physically restrained
• P10. Managing calls during two-handed, non-point-of-care tasks
• P11. Workflow deviations when confirming alerts
• P12. Workflow deviations when rejecting alerts

The initial intention was to choose one problem area to focus on for the upcoming work. However, since many of the problem areas were related to each other, it felt overly restrictive to narrow the focus to just one. The decision was then made to exclude the problem areas P4 and P10. P4 involved telemetry, which Ascom is not a provider of and was therefore out of scope. The choice to exclude P10 was based on the recognition that addressing the problems in the other problem areas would also cover those in P10. Furthermore, it was decided to narrow the target group to nurses and practical nurses, excluding doctors from the main focus. Nurses and practical nurses face similar problems regarding calls as doctors but also face additional challenges with alerts. Therefore, solving the nurses’ and practical nurses’ problems will inherently solve the doctors’ problems as well.

6.3.3 Requirements

The formulation of the requirements began by reviewing the data organised in the affinity diagram. Each note in the affinity diagram was reviewed to ensure all important information relevant to the selected problem areas was included. The requirements were then categorised into different types. As an initial attempt, the requirements were divided into the types suggested in Section 4.3.2 in the methodology chapter.
However, new types were created after realising the initial ones were not the most suitable for this project. The new types are:

• Hardware
• Social aspects
• General functionalities
• Alert functionalities
• Call functionalities
• Usability

These requirements laid the foundation for a new requirement recommending which types of touchless interaction should be utilised in the solution. It was decided that voice user interfaces and gesture-based interfaces should be recommended, not eye tracking systems. Afterwards, the two new types listed below were added, and these included requirements regarding what to consider when utilising voice user interfaces and gesture-based interfaces for communication and information devices in a hospital environment.

• Voice user interface
• Gesture-based interface

Eye tracking was not recommended because it conflicted with requirements regarding comfort and the patient’s perception of healthcare workers’ facial expressions. A more extensive explanation of why eye tracking was excluded can be found in Section 7.2, together with all the requirements.

6.3.4 Persona & scenario

Two personas and scenarios were crafted as the final step of the Define stage. These were derived from the information gathered on the healthcare workers who participated in the interviews and observations. Since the chosen focus was on nurses and practical nurses, one persona and scenario were created for each role. The aim was to aid the upcoming Ideate stage and to ensure a shared understanding of the users and their behaviours, needs, and challenges.

Persona 1
Name: Emma
Age: 35
Occupation: Nurse at a medical department

Emma has worked as a nurse for ten years and has been employed at the medical department for four years. Her daily tasks include assessing patients’ health conditions, administering medications, performing medical procedures and speaking to patients’ relatives on the phone.
In her scrub pockets, the smartphone is accompanied by a few pens, a watch, several cheat sheets and an ID card. Every day, Emma strives to balance being present and engaged during patient meetings while remaining available for phone calls. She often feels frustrated by the constantly interrupting phone calls, which force her to start over with her tasks repeatedly.

Scenario 1

Emma walks towards Room 3 in the long hospital corridor. She feels the weight of everything in her pockets dragging her shirt down, and she pulls it up to its correct position. Today, she works the evening shift and is responsible for yet another phone along with her assigned work phone. Emma arrives at the room and greets her patient, who is lying in bed. While telling the patient she will set an IV with a new medication, she disinfects her hands and puts on gloves. As she is about to insert the needle into the patient’s skin, one of the phones starts to ring in her pocket. She sighs quietly, puts the needle down, apologises to the patient, and quickly removes one glove before reaching for the correct phone in her pocket. She fumbles for a second, then picks up the phone and cradles it between her shoulder and ear as she answers. With her bare hand, she proceeds to remove the other glove. On the other end of the line is a worried son of one of their other admitted patients, wondering about his father’s current state of health. She excuses herself to the patient and walks out of the room to take the call. Just as the man is expressing his concerns, an alert starts beeping in her ear, and Emma pulls away from the sudden sound and quickly grabs the phone before it falls from her shoulder to the ground. She asks the man to repeat what he said and then assures him that his father is stable. After the call, she notices someone else has managed the alert and returns to her patient. She disinfects her hands once more and finally sets the IV.
Persona 2
Name: Anna-Maria
Age: 50
Occupation: Practical nurse at a medical department

One year ago, Anna-Maria started working at the medical department. She has worked as a practical nurse for 30 years and has experienced many different departments throughout the years. During a workday, she assists the patients with their daily needs, such as personal hygiene and meals, and performs some medical procedures. For Anna-Maria, it is important that her patients feel at home and seen during their hospital stay, even on stressful days. Here at the medical department, it is her first time using her own device to manage alerts. She appreciates the quieter environment the devices have resulted in but has not fully built a habit of using them as intended.

Scenario 2

“Beep beep beep”, Anna-Maria hears while feeling the familiar vibration in her pocket. She looks up towards the ceiling, sees that the patient in room 7 is calling and starts walking towards the room. Actually, she should pick up her smartphone and confirm the alert to indicate that she is responding to it, but since she is already on her way, she will turn it off in the room instead. The room is at the other end of the corridor, and the beeping sound starts once again before she enters the room. She smiles at the patient and asks what she can help with. The patient says that it looks like his wound dressing has started leaking, and after looking, Anna-Maria sees that it needs to be changed. To ensure sterility, changing a wound dressing is a long process and involves disinfecting one’s hands at least six times. When Anna-Maria removes the old wound dressing, she receives a new alert. She pulls the pocket containing the smartphone forward with her pinky finger to glance at the screen, and she thinks she can distinguish the word Room 8, or was it Room 9? The phone keeps beeping as she continues with the procedure.
“You have a lot to do.”, the patient says, and Anna-Maria seems to hear a hint of guilt in his voice. Maybe it would have been better if she had left the phone outside the room.

6.4 Ideate

The next step of the project was to make design suggestions for how the requirements could be implemented. This process began in the Ideate stage, where the methods of brainwriting, braindrawing and SCAMPER were used. During SCAMPER, it became evident that many issues had already been thoroughly discussed, allowing us to proceed to the next stage. Therefore, the method of Six thinking hats was skipped.

6.4.1 Brainwriting & braindrawing

The first ideation session was conducted using the methods of brainwriting and braindrawing, writing down and drawing ideas connected to each problem area, one at a time. Some ideas were easier to explain by writing than drawing, and vice versa; therefore, combining the methods supported all ideas. Each round started with three minutes to individually generate ideas about one problem area on paper. After this, the papers were switched to continue elaborating on the other person’s ideas for three more minutes. Finally, all ideas were discussed before moving on to the next problem area.

After ideating on all problem areas, the ideas were refined by discussing possible improvements and challenges. Each idea was described in writing and sketched to make it easier for others to understand. Here, aspects such as size and functionality were decided. A large number of ideas were generated, and to select which ones to move forward with, the ideas were evaluated against the requirements. The ideas that did not meet the requirements were eliminated, except one. This idea utilised eye tracking, which the requirements do not recommend, and it was kept due to curiosity about the healthcare workers’ opinions on using a device similar to current mixed-reality headsets. In total, seven ideas progressed to the next stage.
6.4.2 SCAMPER

To further elaborate on the ideas and ensure that no improvements were overlooked, the method of SCAMPER was employed. The intention was to ask and discuss all questions from SCAMPER for each idea. However, after a few iterations, it became clear that all these questions had already been discussed during the previous ideation session. The decision was then made to move on to the next step of the process. Even though most of the discussion was repetitive, it became even clearer that feedback, such as sounds, vibrations and possibly animations, is important when interacting with touchless interfaces.

6.5 Prototype I

The first iteration of the Prototype stage was performed to communicate the ideas from the previous stage, both within the team and with users. During this stage, different types of low-fidelity prototyping methods were used.

6.5.1 Low-fidelity prototyping

The purpose of the first prototypes was to explore the ideas further and, later in the process, to receive early feedback. Therefore, low-fidelity prototyping was selected as an appropriate method. The ideas differed in their technical feasibility; some leaned more toward unconventional concepts, while others relied on more common technology. Consequently, the prototypes reflected this diversity. For two of the seven ideas, digital illustrations and cardboard prototypes were created to provide an understanding of the dimensions and shapes. For the two most unconventional ideas, the projecting headphone and the smart glasses, images of how they could look and be utilised were used instead. Due to the technical limitations of these ideas, creating physical prototypes was considered premature at this stage of the process. The three remaining ideas were communicated by showing illustrations and trying out similar existing products borrowed from the Ascom office. All ideas utilised both voice control and gesture control.
Wearable with top display and integrated earbud

This wearable device is intended to be worn on the chest pocket (see Figure 6.4 for illustrations of the device). The user can interact with the device using voice, hand gestures, and touch. It has a touch-sensitive top display, letting the user quickly view the active output by tilting their head down, and auditory information can be requested on command. To support the user in various situations, information can be received either through the speaker on the device or through the integrated earbud. When the earbud is worn, the device automatically directs all auditory output to it, such as phone calls. This way, sensitive information can be withheld from the public. Only one earbud is provided to ensure that the user remains present with patients and can hear sounds from their surroundings. The earbud needs to fit both ears since a user can have hearing deficiencies in one ear. If the user engages in a phone call using the earbud, a phone icon on the front of the device will indicate that a call is in progress. When the user does not want to use the earbud, it can be placed in the dedicated holder on the side of the device, minimising the risk of it being misplaced and keeping it always within reach. The ea