Designing for Semi-Autonomous Vehicles
In-Vehicle Digital Communication During Low-Level Automation In Congested Traffic Environments
Master's thesis in Interaction Design & Technologies
JESPER KJELLQVIST
HAMPUS LIDIN
Department of Computer Science and Engineering
CHALMERS UNIVERSITY OF TECHNOLOGY
UNIVERSITY OF GOTHENBURG
Gothenburg, Sweden 2018

Master's thesis 2018

Designing for Semi-Autonomous Vehicles
In-Vehicle Digital Communication During Low-Level Automation In Congested Traffic Environments
JESPER KJELLQVIST
HAMPUS LIDIN
Department of Computer Science and Engineering
Division of Interaction Design
Chalmers University of Technology
University of Gothenburg
Gothenburg, Sweden 2018

Designing for Semi-Autonomous Vehicles
In-Vehicle Digital Communication During Low-Level Automation In Congested Traffic Environments
JESPER KJELLQVIST
HAMPUS LIDIN

© JESPER KJELLQVIST, 2018.
© HAMPUS LIDIN, 2018.

Supervisor: Morten Fjeld, Department of Computer Science and Engineering
Advisors: Joel Hammar & Jan Nilsson, Semcon
Examiner: Staffan Björk, Department of Computer Science and Engineering

Master's Thesis 2018
Department of Computer Science and Engineering
Division of Interaction Design
Chalmers University of Technology and University of Gothenburg
SE-412 96 Gothenburg
Telephone +46 31 772 1000

Cover: Illustration of an HMI concept made during the project.

Typeset in LaTeX
Gothenburg, Sweden 2018

Designing for Semi-Autonomous Vehicles: In-Vehicle Digital Communication During Low-Level Automation In Congested Traffic Environments
JESPER KJELLQVIST
HAMPUS LIDIN
Department of Computer Science and Engineering
Division of Interaction Design
Chalmers University of Technology and University of Gothenburg

Abstract

The automotive industry is moving towards autonomous driving, and as more driving task responsibilities are transferred to the motor vehicle, the driver can engage in more non-driving tasks. In this project, we have investigated driver behaviour with so-called secondary tasks (STs) in semi-autonomous motor vehicles, and how a human-machine interface (HMI) for digital communication, and other STs, can be designed for this level of autonomy. We sent out a survey, created concepts, implemented low- and high-fidelity prototypes, and conducted user tests, in order to find a solution that is comfortable, efficient, and safe to use while driving. Our solution consisted of a system with a head-up display (HUD) by the windshield and two touch-sensitive trackpads mounted on either side of the steering wheel. The trackpads control the content shown in the HUD through common touch gestures, such as pressing, swiping, and typing with our own interpretation of a T9 input method, which we call Circular T9. In the end, we had insufficient data to conclude whether our solution was safe enough in a real driving setting. The feedback from the user tests has been generally positive towards the concept, but critical towards the high-fidelity prototypes, specifically regarding the insufficient feedback from the input interface. Our hope is that this project will inspire other projects in designing HMIs for future motor vehicles.

Keywords: Automotive, semi-autonomous driving, secondary tasks, human-machine interface, digital communication, interaction design, concept design, Circular T9, trackpads, head-up display.

Acknowledgements

We want to thank all the people at Semcon, Volvo, and RISE Viktoria who have listened and given feedback to us during the project.
We would like to give special thanks to some people, specifically Joel Hammar and Jan Nilsson at Semcon, for their continuous guidance and support throughout the project, Justyna Maculewicz and Claudia Wege at Volvo, for listening and giving feedback to our ideas, and our supervisor Morten Fjeld, for his support on the project and for giving advice on the user survey. Jesper Kjellqvist & Hampus Lidin, Gothenburg, June 2018 vii Contents Abbreviations xiii Definitions xv List of Figures xvii List of Tables xix 1 Introduction 1 1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Aim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3.1 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3.2 Ethical considerations . . . . . . . . . . . . . . . . . . . 3 1.4 Research questions . . . . . . . . . . . . . . . . . . . . . . . . . 3 2 Theory 5 2.1 Levels of automation . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2 Related studies . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 2.2.1 Locus of control . . . . . . . . . . . . . . . . . . . . . . . 6 2.2.2 Situation awareness . . . . . . . . . . . . . . . . . . . . . 7 2.2.3 Workload . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.2.4 NHTSA guidelines . . . . . . . . . . . . . . . . . . . . . 8 2.3 Current interfaces for digital communication . . . . . . . . . . . 9 2.3.1 Visual and auditory interfaces . . . . . . . . . . . . . . . 9 2.3.2 Text entry methods . . . . . . . . . . . . . . . . . . . . . 10 2.3.3 Notification methods . . . . . . . . . . . . . . . . . . . . 10 2.4 Statistical methods . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.4.1 Chi-squared test . . . . . . . . . . . . . . . . . . . . . . 11 2.4.2 System usability scale . . . . . . . . . . . . . . . . . . . 12 3 Methodology 15 3.1 Literature research . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.2 User research . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Concept development . . . . . . . . . . . . . . . . . . . . . . . . 16 3.4 Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.5 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.5.1 User tests . . . . . . . . . . . . . . . . . . . . . . . . . . 17 ix Contents 3.5.2 Data gathering . . . . . . . . . . . . . . . . . . . . . . . 18 3.6 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 4 Planning 19 4.1 Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4.2 Milestones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4.3 Timeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 5 Design Process 23 5.1 Project preparation . . . . . . . . . . . . . . . . . . . . . . . . . 23 5.2 User research . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.2.1 Market research . . . . . . . . . . . . . . . . . . . . . . . 24 5.2.2 Survey . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 5.2.2.1 Preparing questions . . . . . . . . . . . . . . . 25 5.2.2.2 Analysing the responses . . . . . . . . . . . . . 26 5.3 Concept ideation . . . . . . . . . . . . . . . . . . . . . . . . . . 26 5.3.1 Divergence: brainstorming . . . . . . . . . . . . . . . . . 26 5.3.2 Transformation: choosing and refining ideas . . . . . . . 27 5.3.2.1 Trackpad . . . . . . . . . . . . . . . . . . . . . 28 5.3.2.2 Touch surface . . . . . . . . . . . . . . 
. . . . . 28 5.3.2.3 Secondary HUD . . . . . . . . . . . . . . . . . 30 5.3.2.4 Steering wheel display strip . . . . . . . . . . . 30 5.3.2.5 Scenario mapping . . . . . . . . . . . . . . . . . 31 5.3.2.6 Evaluation of the initial concepts . . . . . . . . 31 5.3.3 Convergence: low-fidelity prototyping . . . . . . . . . . . 34 5.3.3.1 Prototyping . . . . . . . . . . . . . . . . . . . . 34 5.3.3.2 Concept walkthrough . . . . . . . . . . . . . . . 38 5.4 High-fidelity prototyping . . . . . . . . . . . . . . . . . . . . . . 39 5.4.1 Architecture overview . . . . . . . . . . . . . . . . . . . . 39 5.4.2 Implementation of the first concept . . . . . . . . . . . . 41 5.4.3 Implementation of the second concept . . . . . . . . . . . 44 5.4.4 Limitations of using standard touch devices . . . . . . . 46 5.5 User testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 5.5.1 User test of the low-fidelity prototype . . . . . . . . . . . 48 5.5.2 User tests of the high-fidelity prototypes . . . . . . . . . 48 5.5.3 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 50 6 Results 51 6.1 Survey responses . . . . . . . . . . . . . . . . . . . . . . . . . . 51 6.2 Data from high-fidelity user tests . . . . . . . . . . . . . . . . . 54 6.2.1 System usability scores . . . . . . . . . . . . . . . . . . . 54 6.2.2 Control question performances . . . . . . . . . . . . . . . 55 6.3 Feedback from follow-up interviews . . . . . . . . . . . . . . . . 56 7 Data Analysis 57 7.1 Analysis of the survey data . . . . . . . . . . . . . . . . . . . . . 57 7.2 Analysis of the system usability scores . . . . . . . . . . . . . . 59 x Contents 8 Discussion 61 8.1 Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 8.2 Work process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 8.3 Generalisations . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 8.4 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 9 Conclusion 67 Bibliography 69 A Hierarchical Task Analysis of Volvo XC60 Infotainment System I B Survey V B.1 Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V B.1.1 Driving habits (1) . . . . . . . . . . . . . . . . . . . . . . V B.1.2 Background (2) . . . . . . . . . . . . . . . . . . . . . . . VI B.1.3 General driving habits (3) . . . . . . . . . . . . . . . . . VI B.1.4 Advanced driver assistance (4, 7) . . . . . . . . . . . . . VII B.1.5 Performing non-driving tasks while driving (5) . . . . . . IX B.1.6 Performing non-driving tasks while driving with active driver assistance (6) . . . . . . . . . . . . . . . . . . . . . . . . X B.1.7 City driving - performing non-driving tasks (9) . . . . . . XII B.1.8 Task interaction (10, 13) . . . . . . . . . . . . . . . . . . XIII B.1.9 City driving - performing non-driving tasks with active driver assistance (12) . . . . . . . . . . . . . . . . . . . . . . . XIII B.1.10 Expectations with driver assistance (14) . . . . . . . . . XIV B.1.11 Experience with driver assistance (15, 16) . . . . . . . . XV B.1.12 Followup Questions (17) . . . . . . . . . . . . . . . . . . XV B.2 Additional results . . . . . . . . . . . . . . . . . . . . . . . . . . XVI B.3 Additional chi-squared tests . . . . . . . . . . . . . . . . . . . . XVIII C User Roles XXXI D Low-Fidelity Prototype User Test XXXVII D.1 Subject 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XXXVII D.2 Subject 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .XXXVIII D.3 Subject 3 . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . XL E High-Fidelity Prototype User Test 1 XLIII E.1 Subject 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIII E.2 Subject 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIV E.3 Subject 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIV E.4 Subject 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIV E.5 Subject 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLV E.6 Subject 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLV F High-Fidelity Prototype Test 2 XLVII F.1 Subject 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLVII xi Contents F.2 Subject 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLVIII F.3 Subject 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLVIII F.4 Subject 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLVIII F.5 Subject 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIX F.6 Subject 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIX F.7 Subject 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XLIX xii Abbreviations ACC adaptive cruise control. 5, 39 ADAS advanced driver assistance system. 25, 26, 31, 39, 41, 51, 57, 61, 62, XVI, XVIII, XXXI AS auto-steering. 5, 35, 39 CC cruise control. 37 DIM driver information module. 5, 7, 9, 10, 24 FCWS forward collision warning system. 5 HDD head-down display. 9 HMI human-machine interface. v, 1, 2, 3, 15, 16, 17, 23, 24, 64, 67, 68 HTA hierarchical task analysis. 24, I HUD head-up display. v, 9, 23, 29, 30, 34, 39, 40, 41, 42, 43, 45, 47, 56, 62, 64, XXXVIII, XLIV, XLV, XLVII, XLVIII, XLIX LDWS lane departure warning system. 5 LOC locus of control. 6, 23 MA mode awareness. 2 MOL mental overload. 7, 23 MUL mental underload. 7, 23 MWL mental workload. 7, 23 NDS naturalistic driving study. 5 NHTSA National Highway Traffic Security Administration. 3, 5, 6, 7, 8, 10, 30, 67 OEM original equipment manufacturer. 8 PA pilot assist. 24, 44 xiii Abbreviations SA situation awareness. 7, 23 SEER Seamless, Efficient and Enjoyable user-vehicle inteRaction. 1, 15 ST secondary task. v, 1, 2, 5, 7, 8, 9, 23, 25, 51, 61, 62, 67 SUS system usability scale. 11, 12, 17, 18, 48, 50, 54, 57, 58, 59, 62, 67 xiv Definitions ACC Vehicles with ACC can adapt their speed to keep a safe distance to the vehicle in front of it. 5 AdaptIVe EU sponsered project for the development of autonomous vehicles. 1 ADAS A driving assistance system including advanced features, such as AS and ACC. 25 auto-steering A technology which keeps vehicles within the lane markings of the road. 5 DIM Commonly referred to as the “dashboard”, where driving relevant indicators and gauges are placed. 5 emoji Ideograms and smileys commonly used in electronic messaging. 10 external LOC A person’s belief of their life being controlled by external factors, which they cannot influence. See: locus of control, 1, 6 FCWS A warning system which warns the driver when the vehicle is in near- collision in the front. 5 HMI The interface between a human and a machine (for instance a car), which enables the interaction between them. v, 1 infotainment system A system in an automobile responsible for supplying the driver with necessary trip information and entertainment, such as navigation and music. 1 internal LOC A person’s belief of having control over their life. 
See: locus of control, 1, 6 LDWS A warning system which warns the driver when the vehicle is drifting out- side the lane. 5 locus of control The psychological degree of a person’s perception of control in a given context. 6 mode awareness The degree of awareness a person has over the mode or the ac- tivated features within a motor vehicle. 2 xv Definitions naturalistic driving study Driving studies performed in a driver’s natural envi- ronment, without controlled experiments and interference. 5 NHTSA U.S. government agency for traffic and transportation safety. 3 pilot assist A system in semi-autonomous Volvo cars, which includes the AS and ACC driving assistance systems. 24 SAE International International automotive standards organisation. 5 SAE Level 2 SAE International standard, partial automation by both ACC and AS [49]. See: ACC, auto-steering & SAE International, 2, 3, 5, 64 SAE Level 1 SAE International standard, driver assistance by either ACC or AS [49]. See: ACC, auto-steering & SAE International, 5 SAE Level 0 SAE International standard, no automation [49]. See: SAE Interna- tional, 5 secondary task A non-driving related task. v, 1 situation awareness The degree of awareness a person has over the current situ- ation. 7 system usability scale A standardised, subjective scoring method of the effec- tiveness, efficiency and satisfaction with a system. 11 xvi List of Figures 2.1 The six SAE levels of automation. Image: [48]. . . . . . . . . . . 6 4.1 Gantt planning chart of the project. . . . . . . . . . . . . . . . . 22 5.1 A flow chart representing the logical flow of the survey. . . . . . 25 5.2 A user role constructed for casual car drivers using ADAS features. Symbols: [19]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 5.3 Ideas related to the trackpad concept. . . . . . . . . . . . . . . . 29 5.4 Ideas related to the touch surface concept. . . . . . . . . . . . . 29 5.5 Ideas related to the secondary HUD concept. . . . . . . . . . . . 30 5.6 Ideas related to the display strip concept. . . . . . . . . . . . . . 31 5.7 Scenario mapping allowed us to go through our ideas and discover how they work in different scenarios. . . . . . . . . . . . . . . . 32 5.8 Early concept of the trackpad idea. . . . . . . . . . . . . . . . . 33 5.9 Some sketches for different navigation structures of the HUD, before making the first digital prototype. . . . . . . . . . . . . . . . . . 34 5.10 The tangible prototype of a steering wheel with the trackpads. Sym- bols: [54]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 5.11 Digital prototypes showing the different installations of the modular trackpads. The left illustrations show an inner rim construction for the trackpads, while the right illustrations show an outer rim con- struction. The upper illustrations show a quarter past 9 formation of the trackpads used for manual driving, while the lower illustrations show an angled formation of the trackpads used for autonomous driv- ing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 5.12 One transition in the digital storyboard, which shows the interactions of the input (trackpads), as well as the response and information visualisation of the output (HUD). The first frame shows the state of the system before scrolling anticlockwise, and the second shows the state after this action. Images and symbols: [11, 12, 13, 14, 22, 28, 29, 30, 33, 34, 35]. . . . . . . . . . . . . . . . . . . . . . . . . . . 
37 5.13 The digital storyboard showing the standby mode, with the side buttons and trackpads shown in the bottom. Images and symbols: [11, 12, 13, 22, 31, 32, 33, 34, 35, 36, 37, 38]. . . . . . . . . . . . 38 xvii List of Figures 5.14 The setup of the devices used for the prototypes. The tablet (acting as the HUD at the top is where the connections to the two phones (acting as controllers) are initiated. The controllers then send input commands to the HUD, which updates its state accordingly, and fi- nally sends a response back to the controllers, which in turn update their states. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 5.15 The state machine of the prototype for the first concept. There are three modes in which the driver can operate: standby mode, main menu mode, and text writing mode. The last mode, before a tran- sition to the standby mode occurs, will be the mode transitioned to when leaving standby mode. . . . . . . . . . . . . . . . . . . . . 41 5.16 The views of the first prototype, when in standby mode. Symbols: [12, 13, 22, 31, 32, 33, 34, 35, 36, 37, 38]. . . . . . . . . . . . . . 42 5.17 The views of the first prototype, when in main menu mode. Symbols: [12, 13, 14, 22, 28, 29, 33, 34, 35]. . . . . . . . . . . . . . . . . . 43 5.18 The views of the first prototype, when in text writing mode. Symbols: [13, 33, 34]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 5.19 The state machine of the prototype for the second concept. It is similar to the first concept, with the addition of the text editing mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 5.20 The views of the second prototype, when in text editing mode. Sym- bols: [12, 22, 23, 24, 25, 26, 27, 33, 34, 35, 36]. . . . . . . . . . . 46 5.21 The views of the second prototype, when in text writing mode. Sym- bols: [12, 22, 23, 33, 34, 35]. . . . . . . . . . . . . . . . . . . . . 47 5.22 A snapshot of a video of one of the test subjects for the first user test simulation. Images and symbols: [11, 12, 13, 33, 34, 54]. . . . . 48 5.23 The steering wheel prototype used for the second and third user test. Symbols: [12, 22, 23, 33, 34, 35, 54]. . . . . . . . . . . . . . . . . 49 6.1 General information from the 153 respondents. . . . . . . . . . . 52 6.2 The respondents’ frequency and comfort of voice calls, driving to some extent in city environments, comparing between different categories of drivers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 B.1 The respondents’ frequency and comfort of text messaging, comparing between different categories of drivers. . . . . . . . . . . . . . . XVII C.1 A user role constructed for professional truckers not using ADAS. Symbols: [7]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . XXXI C.2 A user role constructed for professional car drivers not using ADAS. Symbols: [19]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . XXXII C.3 A user role constructed for daily casual car drivers not using ADAS. Symbols: [19]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . XXXIII C.4 A user role constructed for occasional casual car drivers not using ADAS. Symbols: [19]. . . . . . . . . . . . . . . . . . . . . . . . . XXXIV C.5 A user role constructed for professional truckers and car drivers using ADAS. Symbols: [7]. . . . . . . . . . . . . . . . . . . . . . . . . XXXV xviii List of Tables 4.1 Distribution of man-hours over the different activities in the project. 
20 4.2 Milestones and deadlines of the project. . . . . . . . . . . . . . . 21 6.1 A comparison between daily and occasional drivers, driving to some extent in city environments, who may or may not be using ADAS. 52 6.2 The test subject responses from the follow-up questionnaire for pro- totype test 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.3 The test subject responses from the follow-up questionnaire for pro- totype test 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.4 Control question performances of the test subjects for high-fidelity prototype 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 6.5 Control question performances of the test subjects for high-fidelity prototype 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 7.1 Samples of answers from motor vehicle drivers, when asked how fre- quently they make voice calls. The answers are categorised into those who use ADAS features and those who do not. . . . . . . . . . . 58 7.2 The calculation of the expected values E(X). . . . . . . . . . . 58 7.3 The calculation of the chi-squares χ2. . . . . . . . . . . . . . . . 58 7.4 The SUS scores from the test subjects of the first high-fidelity proto- type iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 7.5 The SUS scores from the test subjects of the second high-fidelity prototype iteration. . . . . . . . . . . . . . . . . . . . . . . . . . 59 B.1 The number of responses from the different nationalities in the “Other” sector of Figure 6.1a. . . . . . . . . . . . . . . . . . . . . . . . . XVI B.2 The null hypothesises that were rejected. . . . . . . . . . . . . . XVIII B.3 The calculation of the chi-squares χ2 of the frequency of engaging in text messaging, comparing between car drivers with and without ADAS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XIX B.4 The calculation of the chi-squares χ2 of the frequency of engaging in text messaging, comparing between daily and occasional car drivers. XX B.5 The calculation of the chi-squares χ2 of the frequency of engaging in text messaging, comparing between professional and casual car drivers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XXI B.6 The calculation of the chi-squares χ2 of the frequency of engaging in voice calls, comparing between car drivers with and without ADAS. XXII B.7 The calculation of the chi-squares χ2 of the frequency of engaging in voice calls, comparing between daily and occasional car drivers. XXIII xix List of Tables B.8 The calculation of the chi-squares χ2 of the frequency of engaging in voice calls, comparing between professional and casual car drivers. XXIV B.9 The calculation of the chi-squares χ2 of the comfort of engaging in text messaging, comparing between car drivers with and without ADAS. XXV B.10 The calculation of the chi-squares χ2 of the comfort of engaging in text messaging, comparing between daily and occasional car drivers. XXVI B.11 The calculation of the chi-squares χ2 of the comfort of engaging in text messaging, comparing between professional and casual car drivers. XXVII B.12 The calculation of the chi-squares χ2 of the comfort of engaging in voice calls, comparing between car drivers with and without ADAS.XXVIII B.13 The calculation of the chi-squares χ2 of the comfort of engaging in voice calls, comparing between daily and occasional car drivers. 
XXIX
B.14 The calculation of the chi-squares χ2 of the comfort of engaging in voice calls, comparing between professional and casual car drivers. XXX

1 Introduction

The environment in today's motor vehicles is no longer just about operating the vehicle, but also about supplying the driver and the potential passengers with additional information and entertainment. Automobiles are thus often equipped with an infotainment system, which supplies the driver with various secondary tasks (STs) for the trip, such as navigational guidance and music players. The challenge in the design of these infotainment systems is to make them as clear and safe to use as possible, since it is likely that the driver will be using the system, to some extent, while driving. Using the system often requires the driver to divert their eyes from the road, which can distract them from the main driving tasks and the traffic environment [16]. In order to prevent dangerous traffic situations related to driver distraction, different interfaces between drivers and vehicles (automobiles or trucks) need to be evaluated.

1.1 Background

This project is part of the Seamless, Efficient and Enjoyable user-vehicle inteRaction (SEER) project, which is a collaboration between the stakeholders Volvo Technology, Volvo Cars, RISE Viktoria, and Semcon [47, 53]. The SEER project focuses on how to improve the experience with STs for semi-autonomous cars and trucks. The project also aims to find out more about the behaviours of drivers when using certain automation systems, and how this knowledge can be used to improve current designs, by developing design concepts for different ST interactions. The general findings and knowledge gained from the project are meant to be made available to the public, in order to increase innovation within the automotive industry, although some parts may be kept secret within the companies. Vinnova, Sweden's innovation agency, is the main external financier of the project. The other stakeholders are responsible for the research and the development of the concepts.

One of Volvo's visions is traffic safety first, and they are currently investing time in researching how to make autonomous motor vehicles safer, being part of the EU-sponsored project AdaptIVe [55, 56]. RISE Viktoria's goal is to help the automotive industry grow in a sustainable way, providing research projects in collaboration with industry partners [46]. Semcon works in several industries providing expertise in fields like design and development, amongst others, and their goal is to create better user experiences for different technologies [50, 51].

1.2 Purpose

With the automobile and truck industry shifting more and more towards autonomous vehicles, car manufacturers are showing an increasing interest in different driver assistance and automation technologies. As more of the driving tasks are transferred to automation systems, it is important that the vehicle's human-machine interface (HMI) is designed in such a way that it prevents or hinders the driver from making unsafe assessments regarding the surrounding traffic. People with external LOC tend to be less attentive when driver automation is active, compared to people with internal LOC [18]. Therefore, there will inevitably be cases where a lack of attention can potentially lead to dangerous traffic situations. This creates a need to find specialised in-vehicle solutions for STs that are safe to use.
This is especially true for highly congested traffic situations, such as traffic jams and city driving, where there are several things for the driver to be aware of and attentive to.

1.3 Aim

In this project, the aim is to improve communication-related driver engagement in STs during low levels of automation (specifically SAE Level 2 and below). One of the goals is to investigate best practices for STs regarding different digital communication interactions, such as phone calling, text messaging (SMS and social media instant messaging), and email. Another goal is to develop a concept which describes an interface (HMI) for receiving and sending this type of digital information. The HMI should be usable, efficient, and safe enough not to compromise the usability of the service, or the safety of the driver and other road users. A simple prototype will be developed in order to test and evaluate the concept.

1.3.1 Scope

While we will try to consider many different technologies for the concept, there are two specific ones we will not consider, namely eye-tracking and voice recognition. The project stakeholders have deemed them too early in their development to be adopted into a stable and reliable interface. However, at the time of writing this report, advances in voice recognition have been made, so we might still consider it for a secondary mode of interaction. Additionally, we will not consider any aspects of mode awareness (MA) of the automation state, as we will only focus on the communication aspects of an HMI concept. The core emphasis lies on creating a concept for such an HMI, while thinking outside the box regarding the interaction and user experience. This means that research will not be restricted to the automotive industry, but will also cover other industries where communication is used in critical situations. The HMI concept will be designed to be used in congested traffic situations, where there is a high requirement for attention, such as during traffic jams and city driving.

1.3.2 Ethical considerations

The project will be about sending and receiving digital information while driving, which could impose some ethical issues, as it is illegal¹ (in Sweden) to drive while holding communication equipment. This means that the concept must be safer than traditional cell phones. For the project itself, since user tests will be conducted, extra care must be taken with the participants in order to make sure that the tests are carried out professionally.

1.4 Research questions

Given the context of a motor vehicle with an automation level of SAE Level 2 or lower, this thesis will focus on the following research questions:

(1) What safe interactions exist for using in-vehicle interfaces for digital communication while driving in congested traffic environments?

(2) How can current HMIs for digital communication (infotainment systems) be improved, such that: i) they do not divert the driver's attention in congested traffic environments, and ii) they lend themselves as efficient tools for the driver, without sacrificing comfort and safety?

In the first research question, we define "safe" as how well these interactions comply with common guidelines for in-vehicle equipment. Specifically, the National Highway Traffic Safety Administration (NHTSA) guidelines, which are part of a comprehensive document explaining best practices, will be used as a reference.

¹According to 4 ch. 10 e § in Trafikförordningen (SFS 2017:1284).
2 Theory

In this chapter, we will bring up relevant studies and theoretical concepts regarding driver behaviour and autonomous driving. Most of the theory discussed here concerns research relevant to driver distraction. The guidelines from NHTSA regarding in-vehicle electronic equipment will be brought up as well. We will also introduce some technologies currently used in cars for digital communication and entertainment, and give some suggestions of other interfaces which may be suitable for a semi-autonomous driving context. Lastly, some theory surrounding system evaluation methods and other statistical methods will be presented.

2.1 Levels of automation

The standards organisation SAE International has defined six levels of automation in motor vehicles, which are presented in Figure 2.1. The levels, referred to as SAE Level 0 through 5, describe which technologies a motor vehicle must have in order to be classified at the corresponding level, under some specific condition. SAE Level 0 describes the level at which no automation at all is present and the human driver is in full control of all driving-related tasks. However, it does not exclude warning and intervention systems such as a forward collision warning system (FCWS) and a lane departure warning system (LDWS). SAE Level 1 requires that either adaptive cruise control (ACC) or auto-steering (AS) is present, while SAE Level 2 requires both. SAE Levels 0 through 2 all depend on the human driver to monitor the environment, meaning that the driver must stay focused on the road and be able to take back control whenever the automation systems become deactivated. The higher levels of automation, SAE Level 3 through 5, depend on the system to monitor the environment.

Figure 2.1: The six SAE levels of automation. Image: [48].

2.2 Related studies

There are several different concepts related to driver behaviour, which will be brought up in this section. Previous studies have shown that distracted drivers are more likely to end up in risky traffic situations. Naturalistic driving studies (NDSs) have shown that STs unrelated to the actual driving of the vehicle (for instance making phone calls or using an in-vehicle navigation system) are one cause of diverting the driver's attention from the road [16]. Other causes of driver inattention include drowsiness caused by long drives and managing driving-related equipment and interfaces, such as the driver information module (DIM), during stressful situations. For autonomous vehicles, one study found that the driver's attention shifted more towards a specific in-vehicle interface used in the experiment, which showed the mode or status of the automation functions, although long glances at the display in risky situations rarely occurred [17].

There exist several different guidelines for designing HMIs in cars today. We chose to follow the NHTSA guidelines, since the other guidelines we looked at, the JAMA and Alliance guidelines, are not as comprehensive or as recent. The JAMA guidelines come from Japan and the Alliance guidelines from Europe. We will describe the NHTSA guidelines in more detail later in this chapter.
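To restate the classification rule from Section 2.1 in a compact form, the minimal sketch below maps the presence of ACC and AS to the low automation levels considered in this thesis. The function name and Boolean flags are illustrative assumptions only and are not part of any prototype.

```python
# Minimal sketch of the SAE Level 0-2 classification described in Section 2.1.
# Warning systems such as FCWS and LDWS do not affect the level, so they are
# intentionally not parameters here.

def sae_level(has_acc: bool, has_auto_steering: bool) -> int:
    """Return the SAE level (0-2) implied by the available driving automation."""
    if has_acc and has_auto_steering:
        return 2  # partial automation; the driver must still monitor the road
    if has_acc or has_auto_steering:
        return 1  # driver assistance
    return 0      # no automation

print(sae_level(has_acc=True, has_auto_steering=False))  # 1
```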
2.2.1 Locus of control

In psychology, locus of control (LOC) is a measure of a person's perception of how much control they have over a given situation. LOC is generally divided into two categories: external LOC and internal LOC. People with a strong external LOC tend to think that they do not have much control over their lives, relying more on fate and luck. On the other hand, people with a strong internal LOC tend to think that they have substantial control over their lives, where their actions can have a direct impact on their current situation. Research suggests that people with a strong external LOC are more careless in traffic and more likely to end up in accidents than people with a strong internal LOC [18]. It is also suggested that automation does not have any effect on this driver behaviour [52].

2.2.2 Situation awareness

Situation awareness (SA) is a person's perception and understanding of events and information in their vicinity. It involves understanding the consequences of actions, either your own or external ones, and how these events will impact your goals, either now or in the future. People with high situation awareness are less prone to make mistakes, i.e. they have fewer accidents due to "human error", while people with low situation awareness make these kinds of mistakes more often, typically in stressful situations where the information flow is high. This can prove dangerous, as accidents in cars or other vehicles can have fatal consequences for the driver and potential passengers or other road users. According to Endsley's model of situation awareness, there are three levels of understanding: perceiving the elements, comprehending the situation, and projecting future status [8].

2.2.3 Workload

When driving a car, there are several things for the driver to consider. There could be internal or external events that need to be handled appropriately to avoid risks. The driver needs to focus on the task of driving the vehicle itself, as well as many other factors during the course of the driving session. This can be described as the workload of the driver, which can be separated into three categories: visual workload, motor workload, and mental workload (MWL). This is similar to how NHTSA divides their distraction factors, explained in the next section. In short, visual workload refers to how many things the driver has to look at, while motor workload refers to the amount of work performed by their motor skills [6]. MWL describes how many things the driver has to consider at the same time, using their neurophysical, perceptional and cognitive abilities [6]. The point at which the MWL reaches a critical level, i.e. the driver feels overwhelmed by all the internal and external events and actions to consider, is called mental overload (MOL). Conversely, if the driver feels that they have too few things to consider and as a result becomes less attentive, it is called mental underload (MUL). Both conditions should be avoided, as they are deemed equally serious, but for different reasons. Monotonous driving environments, such as highway driving, could potentially be a risk for MUL, while MOL is more often associated with stressful driving, e.g. congested traffic situations in cities [52, 57].

2.2.4 NHTSA guidelines

A report of guidelines and recommendations has been compiled by NHTSA regarding in-vehicle electronics, which outlines how ST devices distract the driver and what to consider when designing vehicle interfaces, in order to prevent accidents [39]. The report defines three types of distractions: visual, manual, and cognitive. Anything that makes the driver divert their eyes from the road is considered a visual distraction.
For instance, the DIM has many indicators which forces the driver to look down. Manual distractions refer to the type of distraction where the driver takes their hands of the steering wheel, in order to operate another non- driving related device, such as a cell phone. Lastly, cognitive distractions include events and activities which divert the driver’s attention from their main driving tasks. In the report, NHTSA have taken results from several driver distraction studies, and determined the risk odds ratios from various secondary tasks in both cars and heavy trucks [16, 39, 40]. A risk odds ratio of 1 means it is as safe as the baseline measures. Ratios above 1 indicate STs with an increase in driving risk, while ratios below 1 work as protective ST, which make it safer to drive doing these STs rather than doing no ST at all. Examples of risk odds close to 1 are talking or listening to a hand-held phone in both trucks and cars, and adjusting the instrument panel in a truck. The protective STs that are mentioned are auditory-visual tasks, and include talking in hands-free phone in a truck, or interacting with a passenger in both. The risk odds increase when performing STs such as reading, reaching for moving objects, or text messaging on a mobile phone, all visual-manual tasks. All the tasks requiring the driver to operate a hand-held phone are not recommended, according to NHTSAs guidelines. Text messaging on a phone (in a truck) had the highest risk odds ratio out of all STs. The NHTSA guidelines are divided into three phases, with the first phase (which only considers visual-manual in-vehicle equipment) being the only one proposed so far [39]. The guidelines are meant to guide original equipment manufactur- ers (OEMs) to design in-vehicle equipment that discourage drivers from high risk STs, and are aimed only towards light vehicles, such as passenger cars and lighter trucks and buses. The guidelines specifies STs which are inherently distracting to the driver, including viewing non-driving related images and videos, reading auto- scrolling texts, using more than 6 button or key presses for text input, and reading more than 30 alpha-numeric characters of text. NHTSA recommends OEMs to de- sign their in-vehicle equipment, so that it prevents the driver from performing unsafe STs while driving. The guidelines also state two test methods, which can be used to determine whether a specific ST should be performed while driving or not. The first test method is to measure the glance time away from the road, having a threshold of at most 2 seconds of glance time away at any one time, and at most 12 seconds of cumulative glance time away, during the performance of the task. The second test method uses a visual occlusion technique to ensure that a task can be performed while driving using 1.5 second glances and a cumulative glance time away from the road of no 8 2. Theory more than 9 seconds. Lastly, the guidelines state that ST device functions should be designed so that it requires at most one hand to operate (including input devices mounted on the steering wheel), and that the active display of the device should be located as close as possible to the driver’s forward line of sight, with a maximum of 30° of downwards viewing angle. 2.3 Current interfaces for digital communication There were several types of interfaces for digital communication we thought of be- fore the design process of the project, that seemed interesting to try out. 
2.3 Current interfaces for digital communication

There were several types of interfaces for digital communication that we thought of before the design process of the project, and that seemed interesting to try out. The aim of the project was to see which one best fits our goals (safety, comfort, efficiency). Communication in cars these days is most often done via a mobile phone, either by talking (hands-free or hand-held) or by text messaging. Many modern cars have infotainment systems with touch screens, which can be connected to a mobile phone. These infotainment systems often come with complementary tangible input interfaces, such as buttons and knobs, used to navigate menus and input text.

The interaction between the driver and the desired digital information can be broken up into incoming and outgoing information. Interfaces for incoming information can be both visual and auditory. All cars and trucks have a DIM near the line of sight of the road, which shows text and ideograms of important driving-related information, usually warnings about the engine, fuel and slippery roads, but also things such as the driving distance until the fuel tank is empty. By using displays near the driver's line of sight, such as a head-up display (HUD), similar solutions could be made for digital communication, whether to show full text messages or just notifications. Outgoing information can be sent using different keyboard typing methods and touch gestures, preferably near where the driver has their hands on the steering wheel, in order to minimise the manual distraction.

2.3.1 Visual and auditory interfaces

Communication revolves around the exchange of information between two parties, and there are several ways of receiving information, which may or may not be obtrusive to the driver. The information has to be conveyed in an efficient, comfortable and safe way, according to the project requirements. Visualising digital communication while driving is a task that has been tried with several interfaces and techniques before. The most common way to visualise information to the driver is, as mentioned, via the DIM, which can consist of either analogue (gauges and meters) or digital (display) components. Additionally, there is often an infotainment system in the middle-front compartment of the car, often showing non-driving related information. The problem with DIMs and infotainment systems is that they are head-down displays (HDDs), meaning that they require the driver to take their eyes off the road. A solution to this is to bring them closer to the line of sight, in the form of a HUD, for instance by having a projected image on a small screen by the windshield. Another form to consider is speech synthesis, having incoming messages read to the driver. This removes the visual distraction, but may introduce new problems, such as cognitive distractions.

2.3.2 Text entry methods

Text input is one of the major methods of digital communication today, used, for instance, when writing emails, text messages, or social media status updates. There are several text entry methods which may be beneficial in an automotive environment. The numerical keypads used on older mobile phones have the advantage of eliminating the visual distraction, since one can use the physical affordance of the keys to input characters. In combination with the T9 word prediction software, the number of key presses can be reduced. Modern smartphones mostly use the XT9 prediction software together with the QWERTY keyboard layout, which predicts the words the user types based on the letters they have written so far, and thus may reduce the number of errors the user makes.
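To make the T9 idea above concrete, the sketch below shows the core of a classic T9 lookup: each dictionary word is mapped to the digit sequence the user would press, and a typed sequence returns the candidate words to disambiguate between. The word list is a made-up placeholder, and this is plain T9 rather than the Circular T9 variant developed later in the project.

```python
# Minimal sketch of classic T9 word lookup on a 9-key layout.
T9_KEYS = {
    'a': '2', 'b': '2', 'c': '2', 'd': '3', 'e': '3', 'f': '3',
    'g': '4', 'h': '4', 'i': '4', 'j': '5', 'k': '5', 'l': '5',
    'm': '6', 'n': '6', 'o': '6', 'p': '7', 'q': '7', 'r': '7', 's': '7',
    't': '8', 'u': '8', 'v': '8', 'w': '9', 'x': '9', 'y': '9', 'z': '9',
}

def to_digits(word):
    """Translate a word into the key sequence a T9 user would press."""
    return ''.join(T9_KEYS[c] for c in word.lower())

def build_index(words):
    """Group dictionary words by their digit sequence."""
    index = {}
    for word in words:
        index.setdefault(to_digits(word), []).append(word)
    return index

index = build_index(["home", "good", "gone", "hood", "hello"])
print(index["4663"])   # ['home', 'good', 'gone', 'hood'] -- candidates for one key sequence
```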
Swipe is another text input method that can be used on QWERTY keyboards. By swiping a finger across the keys of the intended word, Swipe can anticipate which word the user wanted to write. This does not need to be as precise as hitting individual keys with standard QWERTY input methods, potentially making it less error-prone. Another form of text input is Scribble, which is a gesture-based input method. Scribble works by having the user "scribble" individual characters or a whole word as they would write it on paper, and the software then translates this into digital text. This has the same potential as the numerical keypads of older mobile phones, since there is no need for the affordances present in keypads, and thus one could potentially learn to scribble without looking.

As mentioned earlier, in order to minimise the manual distraction of an input interface, we could, for instance, have the input device mounted directly on the steering wheel. This may be beneficial in the sense that it allows the driver to quickly take control over the vehicle in critical situations. One idea for such an input device is a knob at the back of the steering wheel, which the driver could use to navigate through a list of options shown on a display, for instance auto-generated replies to a text message¹. Another type of interface could be a split keyboard, much like what is used in iOS on Apple's iPad devices, making it potentially easier to type compared to a full keyboard on one side of the wheel. However, as the NHTSA guidelines state, using both hands for an interface is not recommended, so extra care must be taken when considering bimanual input interfaces.

¹Basson, S.H., Bravin, S.E., Huber, W.B., Kanevsky, D., Noll, A.J., and Skwersky, A., (2017). Messaging in attention critical environments. US9680784 B2.

2.3.3 Notification methods

There may be some cases where it is not appropriate to convey the full message to the driver, for instance when in highly congested areas. An alternative would be to notify the driver of an incoming message without being too obtrusive. The DIM works this way, by lighting up indicators that notify the driver about some aspects of the car, while warning sounds are used to immediately warn the driver about an issue requiring their immediate attention. The concept of using ideograms could perhaps be used in a HUD, for instance in the form of emojis to express simple emotions. Another notification device could be haptic feedback in the driver's seat or in the steering wheel. Different vibration patterns could be used to convey different types of notifications.

2.4 Statistical methods

There are some statistical methods which are useful for gaining insight into a set of data. Two methods, the chi-squared test and the system usability scale (SUS), will be presented in this section. The chi-squared method is useful for comparing nominal and ordinal data using hypothesis testing, and SUS is used to compare two or more systems, or system versions, with each other.

2.4.1 Chi-squared test

The chi-squared test is a statistical method which uses hypothesis testing to see whether there is any significant difference between categories of data [20]. Hypothesis testing works by stating a null hypothesis, which states that there is no difference in the data in question, and an alternative hypothesis, which states that there is a significant difference. The goal of the test is either to reject the null hypothesis in favour of the alternative hypothesis, in order to confirm a difference, or to accept the null hypothesis, in order to confirm no difference.

The chi-squared test works by first finding categories of nominal data to test on, such as, in the case of drivers, those who drive daily and those who drive occasionally, and those who are male and those who are female. Nominal data are data that have no intrinsic order or numeric value, such as the daily and occasional driver categories mentioned above. Ordinal data are data that can be ranked or ordered relative to each other, but do not have a meaningful numeric value to compute a statistic from [21]. The chi-squared contribution of a single cell, denoted χ²_{i,j}, is given by the following equation:

\[ \chi^2_{i,j} = \frac{(x_{i,j} - E(X)_{i,j})^2}{E(X)_{i,j}}, \]

where x_{i,j} is the number of sample points in a pair of options of the respective categories, and E(X)_{i,j} is the expected number of sample points. E(X)_{i,j} is calculated by taking the product of the total numbers of sample points, C_i and D_j, for options i and j of the two respective categories (such as the male-female and daily-occasional categories mentioned previously), and dividing by the total number of sample points T. The relation is given by:
The goal of this test, is to reject the null hypothesis in favour of the alternative hypothesis, in order to confirm a difference, or accept the null hypothesis, in order to confirm no difference. The chi squared test works by first finding categories of nominal data to test on, such as, in the case of drivers, those who drive daily and those who drive occasionally, and those who are male and those who are female. Nominal data are data that have no intrinsic value, such as the daily and occasional driver categories mentioned above. Ordinal data is data that can be ranked or ordered relatively to each other, but does not have a meaningful value to compute a statistic for [21]. Chi-squared, denoted χ2, is given by the following equation: χ2 i,j = (xi,j − E(X)i,j)2 E(X)i,j , where xi,j is the number of sample points in a pair of options of the respective categories, and E(X)i,j is the expected number of sample points. In other words, E(X)i,j is calculated by taking the product of the total number of sample points, Ci and Dj, for options i and j of the two respective categories (such as the male-female and daily-occasional categories mentioned previously), and dividing by the actual number of sample points xi,j. The relation is given by: 11 2. Theory E(X)i,j = (Ci ×Dj)/xi,j By computing χ2 i,j for each possible combination between the categories (in our example there are four: male - daily, male - occasional, female - daily, and female - occasional) and adding the results, you get the final χ2 value. Before we can use this value to see if we will reject the null hypothesis or not, we first have to specify a significance threshold (α), where α = 5% is a commonly used value. Having a lower value for α will strengthen the statistical likelihood of the rejection being valid. We also need the degrees of freedom (ν), which is given by ν = (N −1)× (M −1), where N and M are the number of options in each category, respectively. Using χ2 and ν, a P -value can be obtained, which is a percentage between 0% and 100%. If the P < α, then we can reject the null hypothesis. 2.4.2 System usability scale The SUS is a scoring metric for system usability and efficiency, and can be calculated using the answers from 10 subjective questions regarding the system. Each question is answered with a value between 1 and 5, where 1 means strongly disagree and 5 means strongly agree, and values in between are a linear interpolation of the two extremes. The questions that are used for the SUS are: 1. I think that I would like to use this system frequently, 2. I found the system unnecessarily complex, 3. I thought the system was easy to use, 4. I think that I would need the support of a technical person to be able to use this system, 5. I found the various functions in this system were well integrated, 6. I thought there was too much inconsistency in this system, 7. I would imagine that most people would learn to use this system very quickly, 8. I found the system very cumbersome to use, 9. I felt very confident using the system, and 10. I needed to learn a lot of things before I could get going with this system. The questions alter in the way they are phrased. Odd-numbered questions are phrased such that a 5 would increase the score, while even-numbered question are phrased such that a 1 would increase the score. The calculation of the score for a given question Qn(x), where n is the question number and x is the answer value between 1 and 5, is given by: 12 2. 
2.4.2 System usability scale

The SUS is a scoring metric for system usability and efficiency, and can be calculated from the answers to 10 subjective questions regarding the system. Each question is answered with a value between 1 and 5, where 1 means strongly disagree and 5 means strongly agree, and values in between are a linear interpolation of the two extremes. The questions used for the SUS are:

1. I think that I would like to use this system frequently,
2. I found the system unnecessarily complex,
3. I thought the system was easy to use,
4. I think that I would need the support of a technical person to be able to use this system,
5. I found the various functions in this system were well integrated,
6. I thought there was too much inconsistency in this system,
7. I would imagine that most people would learn to use this system very quickly,
8. I found the system very cumbersome to use,
9. I felt very confident using the system, and
10. I needed to learn a lot of things before I could get going with this system.

The questions alternate in the way they are phrased. Odd-numbered questions are phrased such that a 5 would increase the score, while even-numbered questions are phrased such that a 1 would increase the score. The score for a given question, Q_n(x), where n is the question number and x is the answer value between 1 and 5, is given by:

\[ Q_n(x) = \begin{cases} (x - 1) \times 2.5, & n \text{ odd} \\ (5 - x) \times 2.5, & n \text{ even} \end{cases} \]

Thus, with a set of answers {x_n : 1 ≤ n ≤ 10}, the total score S is given by:

\[ S = \sum_{n=1}^{10} Q_n(x_n). \]

3 Methodology

This project aims to reveal more insights and increase our knowledge about driver behaviours when partial automation is active. Therefore, we used methods mostly for developing concepts, conducting evaluation studies, and performing user research [2, 3, 42, 44]. Some methods for digital prototyping (software or hardware) have also been used, in order to create a prototype around a particular technology [42]. We worked iteratively by first finding a concept for a particular technology or communication form, and then testing it by conducting simulations and user observations in a controlled laboratory setting, looking at different measures of attentiveness [44]. Our target driver group includes professional, experienced, intermediate, and beginner drivers, of varying ages and familiarity with technology. The results were evaluated and used for the next iterations of the concept. Several user tests were conducted, so that meaningful conclusions could be drawn about the conceptual designs.

Our design process consisted of practical research methods, such as observations and data gathering, and ideation methods, such as brainstorming sessions and making scenarios, in order to spark ideas for potential HMI concepts [41, 45]. We looked at similar studies and their methodology when designing the concept, but were not bound to follow their process in all aspects [10]. We adapted our process to fit our requirements, not worrying too much about whether our exact process had been done before. Looking at current HMI designs in semi-autonomous vehicles such as Tesla, Volvo, and BMW was also helpful to us when designing the concept.

The entire workflow of our design process can be described using Jones' model [15]. Jones' model describes three phases: divergence, transformation, and convergence. During the divergence phase, the understanding of the domain is improved, and several ideas are generated using some form of ideation method. The transformation phase is meant to refine and more closely define the ideas generated from the divergence phase. Finally, the convergence phase is meant to test out the concepts in the form of prototypes.

3.1 Literature research

The initial phase of the project involved a couple of stakeholder meetings with Volvo and Semcon. In addition, each member of this project performed a simulation conducted by the stakeholders of the SEER project, which tested different types of typing interfaces during manual and automated driving. To get a better understanding of the problem domain, several pieces of literature, articles, technical reports, and internal documents were read [5]. Most of the literature research was done in the beginning of the project, but an ongoing research effort was made as we went on to develop the concepts and plan for the user tests.

3.2 User research

Before developing a concept, we needed to specify the driver needs. One way to do that was to perform ethnographic interviews and observations with drivers, including drivers of semi-autonomous vehicles, if participants could be found early on [5]. Another way was to study already existing reviews and perform task analyses on various semi-autonomous vehicles, such as those from Tesla, BMW, and Volvo [43].
A combination of these methods gave us relevant data regarding driver behaviours and expectations with automation systems, as well as these systems' potential flaws and quirks.

The user research data was compiled into several user roles, describing a class of users in terms of their needs, interests, expectations, and behavioural patterns in relation to semi-autonomous driving [3]. By looking at these qualities, we were able to identify the experience and end goals of the drivers, and prepare to design for the visceral and behavioural aspects of the concept, respectively. Experience goals describe the sensations of the interface, for instance how safe it feels or how well it keeps the driver focused. End goals describe the motivations that lie behind the tasks being performed on the HMI.

3.3 Concept development

The HMI concept was first manifested through a brainstorming session, for which user requirements had been established earlier, in the user research phase [45]. To refine the concept, we took the knowledge from the user research, i.e. the driver goals, and used it to form various scenarios [4]. The scenarios described different situations and contexts in a driving environment, giving us additional design requirements on top of the safety, efficiency, and comfort requirements given by our stakeholders. To gain further understanding of the driver's engrossment in a driving environment, methods such as scenario mapping were utilised [9]. This was useful for solidifying our thoughts on paper, and for drawing conclusions from them.

During the project, we came up with ideas that involved a specific kind of interaction, not currently found in a mobile phone application or some other form of interaction device. Due to this, we built and programmed a digital prototype capable of carrying out the core aspects of what we wanted to test [42]. The main purpose was to recreate the desired interaction, not to create a fully functional prototype, thus the Wizard of Oz method was used in conjunction [42].

3.4 Prototyping

For testing out the functions of the concept, low-fidelity and high-fidelity prototyping was used [42]. This is useful in order to convey the initial impression of the design to users, without having to explain it to them through conversation. Instead, they can try out the design themselves and get a feel for it. This also helps when getting more relevant feedback from the user tests. Low-fidelity prototypes for physical products are usually made with simple materials, such as paper and cardboard, that are fast and easy to work with [42]. When prototyping software, there are special tools and software which specialise in making it as easy as possible to create a visual or interactive substitute of the system. Often, only a few core functions are mimicked and implemented this way. The high-fidelity prototypes are implemented in code, in order to get a richer response from the system being designed, while still focusing on the most important and central aspects to test.

3.5 Evaluation

When an iteration of the concept had been completed, we used its prototype to evaluate the concept with driver test subjects, as outlined in this section. During the evaluations, we gathered data through different ethnographic data gathering methods and analysed them for the next iterations.

3.5.1 User tests

For the evaluation of the concepts, empirical research methods were employed. We used simulations as one method, which worked as a usability test, to test how efficient the HMI was [2]. After the simulation, we collected answers for the SUS metric, as described in Section 2.4.2, to be able to compare between the different prototype iterations.

During the simulations, observational and correlational methods were used to identify and confirm links between driver inattention and traffic safety when using a particular interface for communication [41, 21]. Other studies have used glance time and the number of glances away from the road as measures of driver distraction. We asked control questions during the simulation regarding the traffic environment, in order to measure the attentiveness of the test subjects. In order to capture other observational data, we asked for the test subjects' consent to be filmed, so that we could use this footage later to extract relevant data [41].

To find driver test subjects, we asked people of different ages and experience levels to participate. Some subjects were asked to participate again during a later iteration, to see whether their performance with the system had changed.
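Since the SUS answers collected in these tests are converted to scores with the formula from Section 2.4.2 (Section 3.6 mentions that simple Python programs were written for this), a minimal sketch of such a calculation is given below; the answer values are hypothetical.

```python
# Minimal sketch of SUS scoring as defined in Section 2.4.2.

def sus_score(answers):
    """Compute the SUS score from ten answers, each an integer from 1 to 5."""
    assert len(answers) == 10
    score = 0.0
    for n, x in enumerate(answers, start=1):
        if n % 2 == 1:               # odd-numbered question: higher answer is better
            score += (x - 1) * 2.5
        else:                        # even-numbered question: lower answer is better
            score += (5 - x) * 2.5
    return score

subject_answers = [4, 2, 4, 1, 3, 2, 5, 2, 4, 2]   # one (made-up) test subject
print(sus_score(subject_answers))                  # 77.5
```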
We used simulations as one method, which worked as a usability test, to test how efficient the HMI was [2]. After the simulation, we collected answers for the SUS metric, as described in Section 2.4.2, to be able to compare the different prototype iterations. During the simulations, observational and correlational methods were used to identify and confirm links between driver inattention and traffic safety when using a particular interface for communication [41, 21]. Other studies have used glance time and number of glances away from the road as a measure of driver distraction. We asked control questions during the simulation regarding the traffic environment, in order to measure the attentiveness of the test subjects. To capture other observational data, we asked for the test subjects' consent to be filmed, so that we could use the footage later to extract relevant data [41].

To find driver test subjects, we asked people of different ages and experience levels to participate. Some subjects were asked to participate again during a later iteration, to see whether their performance with the system had changed.

3.5.2 Data gathering

As mentioned, during the user tests we filmed the sessions (qualitative data) and collected SUS data (quantitative data) from the test subjects [41]. In addition to this, we directly observed the participants' actions and reactions, by taking notes and asking them questions [41]. We also prepared questionnaires that the participants filled in after the user tests, as well as holding short interviews before and after, in order to gain insights not captured by the invitation forms, observations, and questionnaires [41].

3.6 Tools

During the design process, we used several tools to aid us in different ways. The graphical designs were created in Figma and Inkscape. Inkscape is a vector-based drawing tool, used to create more detailed designs and figures, many of which are featured in this report. Figma is a digital prototyping tool, aimed more towards the interaction of a digital system, mainly for smart devices and personal computers. Figma was used to create larger concept illustrations, which we used to create graphical storyboards, i.e. linking together several concept images to create a story that showcases the features of the prototype.

For all the programming of the high-fidelity prototype, Android Studio was used. Android Studio provides all the necessary tools to create an Android app.

Several of Google's tools and services were used. Google Docs was used to write our common diary, collect information and documentation images, and prepare lists of data of various sorts. The survey was created and sent out to drivers using Google Forms. For the data received, we used Google Sheets, which we used for calculating the various statistics, such as the chi-squared values. Additionally, the scripting language Python was used to create simple calculation programs for the SUS scores.

4 Planning

In this chapter, we give an overview of the different activities in the project and how much time was planned to be spent on them. The milestones set up before the project started are also presented here. Finally, a Gantt chart of the initial planning is shown. For most of the project, the planning was followed, so only minor things have been altered in the planning since the planning report.
4.1 Activities

In Table 4.1, all the activities of the project are shown, as well as how many man-hours were planned to be dedicated to them. As can be seen in the table, the project consisted of three phases: preparation, project work, and delivery. Preparation activities include planning out the project, doing literature research, preparing user test invitation forms, attending stakeholder meetings (both SEER stakeholders and Chalmers University of Technology), as well as writing the planning report. During the project work phase, activities included user research, concept development, prototyping, and user testing. The delivery phase includes all non-project related activities required for the delivery, including the project report, presentations, and oppositions.

Most activities that were planned were also performed during the project. The only exception is the participant invitation activity, which was transformed into preparing a user survey for the user research instead. The user survey contained a section which asked for contact information for future user tests, but this information was never used in the project. Instead, participants were found in the area around Lindholmen, Gothenburg, where the project was conducted.

4.2 Milestones

The different milestones and deadlines for the project are listed in Table 4.2. Deadlines concern activities related to the delivery and reporting of our work, while milestones are related to the project work. The milestones for the concept iterations include everything from coming up with or improving the concept to the evaluations of the user tests of that iteration. While we did not meet the exact milestones for each iteration during the project, we did complete three iteration cycles, one for a low-fidelity prototype and two for the high-fidelity prototypes, as described in Chapter 5.

4.3 Timeline

As mentioned, the project consisted of three phases. The first phase, the preparation, continued until 4 weeks after the project start date (January 22nd through February 16th). The project work phase spanned 13 weeks (February 12th through May 11th), starting during the last week of the preparation phase. The last phase, the delivery, partially overlapped the project work phase, spanning 7 weeks (April 23rd through June 8th). Originally, 2 weeks were inserted between the first hand-in of the project and the start of the demo preparation phase. Ultimately, these weeks were removed, since we could plan for an earlier demonstration date. To get a better overview of the phases, a Gantt chart has been made, shown in Figure 4.1.

Phase          Activity                        Man-hours   Phase total
Preparation    Literature research             100         320
               Planning report                 120
               Invite participants              80
               Stakeholder meetings             20
Project work   User research                   160         880
               Iterative design                480
               Prototyping                     120
               User evaluation                 120
Delivery       Project report                  320         400
               Presentation and opposition      80
Total                                         1600

Table 4.1: Distribution of man-hours over the different activities in the project.

Activity                         Milestone / Deadline
Planning report                  February 16th
Concept iteration 1              March 23rd
Concept iteration 2              April 20th
Concept iteration 3              May 11th
First report hand-in             May 25th
Demonstration                    June 5th
Final project report hand-in     June 8th

Table 4.2: Milestones and deadlines of the project.
Figure 4.1: Gantt planning chart of the project.

5 Design Process

In this chapter, we will describe how the project was carried out and what design decisions were made along the way. The project consisted of several phases, including preparation and planning, user and market research, concept ideation and prototyping, and finally user testing and evaluation. The preparation phase of the project, described in Section 5.1, consisted mainly of doing literature research and gaining general knowledge of the automotive industry, specifically in the field of autonomous driving. In Section 5.2, we describe the process of collecting initial user expectations of in-vehicle HMIs, as well as current market solutions. After this, the creative process for finding concepts started, as described in Section 5.3. The concept ideas taken from the ideation process were refined and implemented as working prototypes, as described in Section 5.4. Finally, in Section 5.5, we will go through the process of testing our low- and high-fidelity prototypes, and how the feedback was used to improve and iterate on the concept.

5.1 Project preparation

The initial phase of the project was dedicated mainly to literature research in the domain of STs and semi-autonomous vehicles. Additionally, the structure of the project was planned during this phase, by specifying critical deadlines and milestones, as well as grouping the project into distinct phases. We started reading up on several different subjects regarding human cognition and driving behaviours, such as MWL, LOC and SA, as explained in Chapter 2. During the early stages of the project, we worked closely with our peer group when gathering relevant literature, as we at that point had yet to decide which of the two contexts, city driving or highway driving, we would each focus on. As the literature research progressed, it was decided that we would develop a concept for city driving, while our peer group worked with highway driving.

City driving and highway driving are different enough contexts to impose different challenges. As the research we read suggests, city driving is more likely to lead to MOL due to the stressful environment. In addition, there are likely to be many interruptions if the driver engages in STs during city driving. In contrast, highway driving and long-distance drives can instead lead to MUL, due to the fatigue of monotonous driving. With this knowledge in mind, we gained the insight that our solution for the concept should preferably be as simple as possible, minimising navigational (the HUD) and manual (input methods) excise, while still remaining usable.
To address the manual excise, we wanted the input method for the HMI to be as close as possible to the steering wheel, in order to prevent the user from frequently moving their hands from the wheel. This is especially necessary when driving in congested traffic, due to its interruptive nature. We also decided that the system we were to design should complement the main infotainment system of the car, with our system being a tool for STs, with a focus on digital communication.

5.2 User research

After the planning phase with the literature research had concluded, we began the user research as a basis for the user requirements for the concept. The derived user requirements were partly based on market research, looking at existing car manufacturers, and partly on our own survey, which we sent out to car and truck drivers.

5.2.1 Market research

In order to get an initial starting point for our user requirements, we first set out to look at existing solutions on the market regarding navigation in infotainment systems and similar systems in the DIM. By looking at the interior designs on the web sites of various semi-autonomous cars, such as Tesla, Volvo, and BMW, we got a rough idea of what to expect in these types of cars. Since we were researching the city driving environment, we did not put as much emphasis on researching semi-autonomous trucks, since their self-driving capability in cities is still limited. We did, however, drive a Volvo XC60 with pilot assist (PA) in both light and heavy traffic environments, in order to get a better understanding of the current state of semi-autonomous cars in cities.

To get an overview of how the navigation structure of an infotainment system might look, we performed a hierarchical task analysis (HTA) of the Volvo XC60 model that we drove. We could later use this analysis as a reference when implementing the information architecture of our concept. Ultimately, we wanted to have an architecture more concise than that of an infotainment system. A full description of the analysis can be seen in Appendix A.

5.2.2 Survey

To complement the market research, we prepared a survey to send out to car and truck drivers with different driving skills. The purpose of the survey was to get close feedback from the core target group, and to gain insights about what their goals might be with a new form of HMI for semi-autonomous motor vehicles. The survey was prepared using Google Forms, which supports conditionally showing questions depending on previous answers. This was appropriate for us, since we had two different areas of interest (city and highway driving, respectively) between us and our peer group of the SEER project. Before sending out the survey to the relevant target group, we prepared a pilot survey, which we sent out to friends and colleagues, in order to ensure that the survey was properly written.

5.2.2.1 Preparing questions

The survey contained several sections of related questions, with conditional paths that the survey could take depending on the answers. In the first section of the survey, we asked general questions about the respondents' background, such as age, nationality, and how long they have had a driver's license. The second section asked questions about their driving habits, such as how often they drive, what type of vehicle they drive, and in what environments (city driving, highway driving, or both).
The third section had questions specific to their experience with vehicles with advanced driver assistance systems (ADAS). Depending on their answers, the survey takes one of two paths. If they answer that they primarily drive a vehicle with ADASs, they take a path with questions specific to ADAS. If they answer that they do not drive a semi-autonomous motor vehicle, their path instead has more open-ended questions regarding their assumptions about, or desires for, a more autonomous vehicle. The flow of the survey can be more easily understood by looking at Figure 5.1. The figure shows which sections (numbered from 1 to 17) the respondents will see, depending on their answers. The questions in these sections are outlined in Appendix B.

Figure 5.1: A flow chart representing the logical flow of the survey.

Regardless of which path they take, they end up in a section with questions about the frequency and comfort of engaging in STs while driving, such as making phone calls, text messaging, reading email, etc.

The survey was distributed through several forums on the Internet, mainly Facebook groups administered by truckers and semi-autonomous car enthusiasts. This way, we were able to get responses from several different parts of the world. An overview of the 153 responses we received can be found in Appendix B.

5.2.2.2 Analysing the responses

After we had gotten enough responses, we exported the data into a spreadsheet, where we could better analyse potential correlations within parts of the data. We wanted to see if there were any connections between performing STs and any nominal data, such as whether the respondents drive with ADAS or not, or whether they drive regularly or occasionally. To compare nominal and ordinal data like this, we used the chi-squared method, as described in Section 2.4.1. The chi-squared method gave us statistical assurance regarding which STs are more prevalent in certain situations. We could then use these insights to construct user goals to use for the ideation phase of the project. A more detailed explanation of how the statistics were calculated can be found in Section 7.1.
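As an illustration of this kind of test, the sketch below runs a chi-squared test of independence on a small contingency table. The actual calculations were done in Google Sheets (see Section 3.6); here we use Python with SciPy, and the counts and category labels are made up purely for illustration and are not results from our survey.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts, for illustration only (not our survey data).
# Rows: drives with ADAS / drives without ADAS
# Columns: texts while driving "never", "sometimes", "often"
observed = [
    [12, 25, 13],
    [30, 18, 7],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")

# A p-value below the chosen significance level (e.g. 0.05) indicates that
# texting frequency is not independent of whether the driver uses ADAS.
```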
5.3 Concept ideation

The purpose of the research phase of the project was for us to gain a better understanding of the target group and their requirements. The insights we gained from the research were then used as a starting point when setting up the development of the concept. The requirements were formalised into user goals and categorised into user roles, which we used for the brainstorming session. An example of one of the user roles we constructed is shown in Figure 5.2. In the user roles, we collected the common responses from a subset of the target group, which in this example are casual car drivers using ADAS features. We summarised the responses into a paragraph describing the target group, and condensed them into concrete one- or two-word goals, shown in the green bubbles. The most commonly mentioned goal of the survey respondents, referred to as their main goal, is shown in dark green, while the less commonly mentioned goals, referred to as their secondary goals, are shown in light green. All the other roles created are collected in Appendix C.

Figure 5.2: A user role constructed for casual car drivers using ADAS features. Symbols: [19].

5.3.1 Divergence: brainstorming

We prepared for the brainstorming session by writing down, on a whiteboard, all the survey respondents' main and secondary end goals for secondary tasks in the motor vehicle. This way, we would always have their goals in mind when coming up with ideas. We also wrote down different existing technologies, which we would use as a base when coming up with different interaction cases for the brainstorm. Our goal was to come up with a couple of potential concepts that we deemed most interesting for further investigation.

During the brainstorming session, we wrote down the ideas on post-it notes and numbered each note, to better keep track of the order in which the ideas were brought up. After the session, we sorted the ideas into two main categories: input and output. Ideas in the input category emphasised how the user would interact with and deliver information to the system, for instance touch surfaces with swiping gestures, gaze-and-speech, and tactile buttons like knobs and keyboards. Ideas in the output category instead emphasised how to deliver information to the user, for instance through displays, sounds, and haptic feedback.

We took a couple of ideas from the brainstorming which we believed could be improved upon and made into a full concept. The next step for these ideas was to see which of them violated common guidelines (mostly from NHTSA) for in-vehicle equipment, and then condense the ideas into around four main concepts to continue to iterate upon.

5.3.2 Transformation: choosing and refining ideas

Previously, we had compiled all the relevant in-vehicle equipment guidelines into one cohesive document. This helped us identify which of our ideas had the fewest violations of these guidelines, and thus choose the most suitable ideas. By modifying or rejecting ideas in this early phase, we would save time and effort in the later development stages. The next step was to take the viable ideas and construct four rough concept ideas, which we will describe in the following sections, starting with the two input concepts and ending with the two output concepts. The concepts describe the general setup of the device, its different input and output modes, as well as its advantages and disadvantages. This helped us get a rough overview of the different interactions that might be performed by the users.

5.3.2.1 Trackpad

One of the early input concepts that we constructed was the use of one or a pair of trackpads, mounted on the steering wheel, as the main input for our system; see Figure 5.3 for the related ideas from the brainstorming session. This concept was inspired by the Steam Controller and the Steam Big Picture GUI, which is made for playing PC games without the need for a keyboard or mouse for navigation or moving the on-screen cursor. The trackpads are circular, touch-sensitive areas, parts of which also work as tactile buttons. This combination allows both touch gestures and the physical affordances of tactile buttons to be blended into one input interface. Taking inspiration from this interaction method, we decided that the trackpad input interface would be the first concept we would work further on.

We had several ideas in mind as to how to write text using the trackpads. One of the ideas was to use the trackpad as a drawing area to draw individual letters or words, which we referred to as Scribble. Another idea was also inspired by the Steam Controller, where each trackpad would control a separate cursor on a split QWERTY keyboard, and the user would press down on the trackpads in order to type a character.
Lastly, we also had an idea to adapt the T9 input method, commonly used on older mobile phones, to the layout of the trackpads. We refer to this idea as Circular T9. Instead of having the T9 buttons placed in a grid, they are placed in sectors around the trackpad, with the space button placed in the middle.
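To make the layout concrete, the sketch below shows one way a touch position on a circular trackpad could be mapped to a Circular T9 key. It is an illustrative Python sketch; the sector order, centre radius, and letter groups are our own assumptions, not the behaviour of the later prototypes.

```python
import math

# Eight T9-style letter groups placed clockwise around the trackpad,
# starting at the top; the centre of the pad acts as the space key.
T9_GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
CENTRE_RADIUS = 0.25  # assumed fraction of the pad radius treated as the centre

def circular_t9_key(x, y):
    """Map a touch point (x, y), normalised so the trackpad is the unit
    circle centred at (0, 0), to a Circular T9 letter group or space."""
    if math.hypot(x, y) <= CENTRE_RADIUS:
        return " "  # centre press: space
    # Angle measured clockwise from the 12 o'clock position, in degrees.
    angle = math.degrees(math.atan2(x, y)) % 360
    sector = int(angle // (360 / len(T9_GROUPS)))
    return T9_GROUPS[sector]

print(circular_t9_key(0.0, 0.1))  # near the centre -> ' '
print(circular_t9_key(0.0, 0.9))  # top sector      -> 'abc'
print(circular_t9_key(0.9, 0.0))  # 3 o'clock       -> 'ghi'
```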
We imagined the trackpads having some form of display on them as well, showing icons for the different functions available on them. The icons would change depending on the context; that is, if the user navigates to a certain menu in the system, the relevant icons for the functions in that menu would appear. The trackpads have the advantage of being configurable to the users' liking, whether it is to turn off certain functions, or to flip the left and right trackpads, if two of them are present. One disadvantage may be that the interactions can be confusing to new users or occasional drivers, especially those who are generally not comfortable with new technology, such as smart phones.

Figure 5.3: Ideas related to the trackpad concept.

5.3.2.2 Touch surface

Another input method that was considered was a touch surface placed in the middle of the steering wheel; see Figure 5.4 for the related ideas. As we imagined the surface to be relatively large, it could facilitate many methods of input, and theoretically even output, in the shape of a touch screen instead of just a surface. The input methods imagined for this interface included using a standard QWERTY keyboard layout, not too different from a smart tablet interface. Other input methods, such as the Scribble idea, could also be used with this interface. The advantage of having a large, tablet-like touch area was that it allowed for a richer typing interface, using common writing interfaces like QWERTY, and thus sparing the users from having to learn an entirely new text input method. There were some problems with this concept, however, one of them being that the placement of the surface could get in the way of the airbag. Furthermore, the user would not have a steady and safe grip of the wheel when using it, and it may result in an unnatural position of the hand and arm.

Figure 5.4: Ideas related to the touch surface concept.

5.3.2.3 Secondary HUD

One of the output concepts imagined was to have a secondary HUD to use for entertainment and communication-related tasks, while a main HUD (which is present in some cars) would show all the driving-related information, such as the current speed of the vehicle. The secondary HUD was imagined to work as a complement to the infotainment system of the motor vehicle, showing condensed versions of the relevant apps; see Figure 5.5 for the related idea. The advantage of having a HUD is that it keeps the driver's line of sight close to the road. It can also have a potentially large display to work with, allowing it to show more information in one frame at a time. However, a disadvantage is that it may be difficult to show content when there is direct sunlight on the windshield, as well as to maintain enough contrast against the road to show information. It can also occlude parts of the road, which is not desirable in highly congested traffic areas such as cities.

Figure 5.5: Ideas related to the secondary HUD concept.

5.3.2.4 Steering wheel display strip

The other output concept we constructed was to have a thin display strip on the top of the steering wheel; see Figure 5.6 for the related idea. This display strip was imagined to be a small LCD display, which could be used to show messages, simple menus, and notifications. One benefit of having a display at the top of the steering wheel is that it is closer to the driver's line of sight to the road, which is compliant with the NHTSA guideline of a maximum downwards viewing angle of 30°. However, since the screen real estate is fairly limited, the information resolution, i.e. the amount of information fitted in one screen at a time, goes down as well. Furthermore, there is also a question about the cost of having a display on top of the steering wheel.

Figure 5.6: Ideas related to the display strip concept.

5.3.2.5 Scenario mapping

When we had our four concepts, we looked at some driving situations and investigated the interactions of our concepts more deeply, using scenario mapping. Scenario mapping was used so that we could put ourselves into the scenario of the user, detect potential problems, and provide solutions for them. Scenario mapping is useful for finding out what the users can and probably will do with a particular system. The insights gained from the scenario mapping were used to choose one input concept and one output concept for the concept development phase. Insights from the scenario mapping were also brought into the design of the interactions when making the digital storyboards and low-fidelity prototypes of the concepts. When we performed the scenario mapping, we divided it into four different scenarios, one for each permutation of input and output concept:

1. Input: Trackpad, Output: Steering wheel display strip
2. Input: Trackpad, Output: Secondary HUD
3. Input: Touch surface, Output: Steering wheel display strip
4. Input: Touch surface, Output: Secondary HUD

One of these scenarios can be seen in Figure 5.7, where we perform the mapping for a professional car driver who is not using any ADAS features of the car. The scenario is that the driver should read a text message from a colleague, using a system with the trackpad and display strip concepts.

Figure 5.7: Scenario mapping allowed us to go through our ideas and discover how they work in different scenarios.

5.3.2.6 Evaluation of the initial concepts

Having specified four potential concepts, it was then time to visualise them in different forms. We put an emphasis on drawing low-fidelity illustrations in the beginning, to show each other the look and feel of our respective mental models of the concepts; an example can be seen in Figure 5.8. It also helped us find and solve certain design quirks that we had not thought about earlier. For instance, the size and placement of the trackpads were altered after early prototyping, as was the addition of physical buttons to satisfy some needs we had not realised we had before.

With our mental models synced, and an agreed-upon conceptual model developed for each of the four concepts, we began an early evaluation. The purpose of this evaluation was to process the concepts one more time and decide whether we wanted to take them further to the next phase, making actual physical and tangible representations of the concepts, or discard them altogether. Each of the concepts was evaluated on its assumed strengths and weaknesses. The evaluation process resulted in us discarding two of the four initial concepts.
Our reasoning was that we only needed one input and one output method, respectively, and the discarded methods had too many quirks that made them inferior to the other two. Our main goal was to ensure comfort, safety, and efficiency, and the discarded methods fell short in most of these areas.

Figure 5.8: Early concept of the trackpad idea.

The touch surface input had some promising properties, but there were several problems with the concept that we initially thought we could find solutions for. The problems, however, were not that easy to solve. The placement of the touch surface was intended to be in the centre of the steering wheel, which we at the time thought was an underutilised area that we could use to increase efficiency and comfort when interacting with the infotainment system. This was all highly theoretical, and we realised that the placement of the touch surface could have the reverse effect. To use the surface, the user has to remove one hand from the steering wheel, which in our case with city driving was not the safest option compared to the trackpads, which enable the user to always keep both hands on the steering wheel and still be able to use the system. We also judged it to be less comfortable, as the user has to hold the arm in a straining position with no armrest, which would be a problem when performing prolonged tasks. Even if we could find ways to solve these issues, there was a problem with the construction, as the centre of the steering wheel is where the airbag is placed in almost every car. Perhaps this was an issue that could have been solved, but the other problems with the method, and the realisation that the trackpads were in almost every way a more interesting idea than the touch surface, resulted in the decision to discard the concept entirely.

A similar argument was made for the display strip on the steering wheel, which was also discarded in favour of a more appropriate design. In theory, the idea of having a display on the top of the steering wheel was intriguing, but there were several limitations to this design. For instance, when turning the steering wheel, it would be increasingly difficult to use the display, as it follows the motion of the steering wheel, making it cumbersome and potentially dangerous to use. It is also a potentially expensive solution, which might make the trade-off between cost and efficiency not worthwhile. The size of the display would also be a limitation; if it were to be implemented, it would not pass as a primary display, but rather as a supporting display. This would make the display strip complementary and perhaps redundant by the nature of its design, i.e. there are other displays in the car that can achieve what the display strip can, and more.

With both the display strip and the touch surface discarded, we decided to take the trackpad and the secondary HUD concepts to the next step in our design process, which was the low-fidelity prototyping. We decided to make both tangible and digital representations of our concepts, to make them as clear as possible to our stakeholders.

5.3.3 Convergence: low-fidelity prototyping

Having agreed on which concepts to continue working on, we began planning for the low-fidelity prototype, which would be used to convey an initial look and feel of the concept and make it easier to show to and test with test subjects later on in the user tests.
We also began thinking about the navigation structure of the HUD, with some of the sketches drawn for it shown in Figure 5.9.

Figure 5.9: Some sketches for different navigation structures of the HUD, before making the first digital prototype.

5.3.3.1 Prototyping

We began by taking the real measurements of a typical steering wheel (in our case a standard Volvo steering wheel), and created a 1:1 template. The template was then cut out of Styrofoam and polished by adding a layer of black paper on top, to make it smoother to hold and give it a unified look. This wheel prototype served as the basis when we measured where to put the trackpads. Once we had decided where to put the trackpads, we created paper representations and put them on the steering wheel prototype. The prototype is shown in Figure 5.10.

Figure 5.10: The tangible prototype of a steering wheel with the trackpads. Symbols: [54].

With the trackpads at the desired position, we realised that there were some issues with the comfort and usability of the current placement of the trackpads when AS is active. The placement of the trackpads had the potential to be tiring for the driver, especially when the car is taking control over the steering wheel. Furthermore, the driver might accidentally disable the semi-autonomous driving mode by firmly holding on to the steering wheel while using the trackpads. As the trackpads' positions were initially fixed, it could become cumbersome when the steering wheel turns by itself and the driver has to adjust their grip to accommodate this.

One possible solution to this was to make the placement of the trackpads independent of the motion of the steering wheel. But there were some issues with this idea, as the construction could be rather complex and interfere with critical systems of the car, for instance the airbag. Nevertheless, a concept was made as a preliminary fix for this issue. The conceptualisation of the new construction concept was created in Figma, as can be seen in Figure 5.11. With the new concept, the driver can adjust the positioning of the trackpads into a more comfortable position depending on their current task. If the user is in semi-autonomous driving mode, they might not want to have their hands on the steering wheel in the standard quarter-to-three position, but instead in a more relaxed state, with their hands lowered and their arms resting on the armrest or in similar comfortable positions.

Figure 5.11: Digital prototypes showing the different installations of the modular trackpads. The left illustrations show an inner rim construction for the trackpads, while the right illustrations show an outer rim construction. The upper illustrations show a quarter-past-nine formation of the trackpads used for manual driving, while the lower illustrations show an angled formation of the trackpads used for autonomous driving.

In theory, the installation concept worked well, but in practice it seemed like an engineering task too far out of scope for this project, as the construction would potentially be complex. Additionally, it might be very expensive, and interfere with the standards for how steering wheels in cars should look and behave. Therefore, we decided to move on with the trackpads in their original positions, without the additional construction for the modular placement. With the help of Figma, we created a detailed digital representation of the trackpad concept, as can be seen in Figure 5.12.
We decided to create a storyboard of the trackpad concept and describe it using different use cases. For instance, one of these use cases might be: "if the user would like to increase or decrease the volume of our system, how would they do it?". Since input interaction through trackpads is unusual in a car, we put an emphasis on how these interactions would behave. The output of the system, the secondary HUD, is more straightforward when it comes to placement and technology, as HUDs have been a part of the automotive industry for a while. For us, it was a matter of coming up with a design which seamlessly integrates the trackpads with a system of entertainment and communication applications. As a simple example, if the user scrolls anticlockwise on the trackpad, the menu in the HUD would preferably scroll anticlockwise as well, mimicking the motion of the input.

Figure 5.12: One transition in the digital storyboard, which shows the interactions of the input (trackpads), as well as the response and information visualisation of the output (HUD). The first frame shows the state of the system before scrolling anticlockwise, and the second shows the state after this action. Images and symbols: [11, 12, 13, 14, 22, 28, 29, 30, 33, 34, 35].
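To illustrate this mapping between input and output, the sketch below shows one way an angular movement on the trackpad could be turned into a menu rotation in the HUD that follows the same direction. The function, step threshold, and sign conventions are our own illustrative assumptions, not taken from the prototype.

```python
STEP_DEGREES = 30  # assumed angular movement required per menu step

def menu_steps(angle_delta_degrees):
    """Convert an angular finger movement on the circular trackpad into
    a number of menu steps in the HUD, keeping the rotation direction.

    Positive deltas are taken as clockwise, negative as anticlockwise;
    the returned value has the same sign, so the HUD mirrors the gesture."""
    return int(angle_delta_degrees / STEP_DEGREES)

print(menu_steps(95))   # clockwise gesture     -> 3 steps clockwise
print(menu_steps(-65))  # anticlockwise gesture -> 2 steps anticlockwise (-2)
```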
During this phase, we discovered that we were missing some essential features in our conceptual system. One of the things we had missed was that there was no back function. That means that if the user was navigating the text messages, there was no obvious way for them to go back to the previous menu. There were other issues as well; for instance, there was no obvious way of putting the system to sleep, nor quick access to some of the essential functions, like cruise control (CC) or media controls, which are commonly present on steering wheels in cars. The solution for this was to add a set of buttons on the inner side (towards the centre of the steering wheel) beside each trackpad. The buttons were intended to be tactile, so that the user would always know what to expect when pressing them, since they only have one function associated with them. In comparison, the trackpads are contextual, which means that they change functionality depending on where in the system the user is.

5.3.3.2 Concept walkthrough

After the addition of the buttons on the side of the trackpads, the functionality stayed the same throughout the project, which is why we describe the specifics of the trackpads in this section. However, since the trackpads are contextual, the functions will change depending on the mode. These modes are more specific to the high-fidelity prototypes, so they will be explained in more detail there instead. For now, we will describe the side buttons and the standby mode functions of the trackpads, which are shown in Figure 5.13.

Figure 5.13: The digital storyboard showing the standby mode, with the side buttons and trackpads shown at the bottom. Images and symbols: [11, 12, 13, 22, 31, 32, 33, 34, 35, 36, 37, 38].

As mentioned before, the two pairs of side buttons positioned between the trackpads are tactile buttons and have the same function associated with them regardless of mode. On the left side, the upper button is unspecified and reserved for any arbitrary car-specific function, and is not used in any of the prototypes. The lower button is the voice dictation button, an input method which allows the driver to dictate any message to be written using our system. However, since this is a deliberate delimitation of the project, we have not defined or implemented any behaviour for it in the system.

On the right side, the upper button is the hibernation button, which is used to put the system into standby mode. When the system is in standby mode, the HUD is disabled and the contextual buttons on the trackpads change to the default, which is the media controls shown in the figure. Whenever this button is pressed, the mode of the system is toggled between standby and the last mode before going into standby. The lower button is the back button, which is used to go back to a previous mode in the system. If the user keeps the button pressed for a couple of seconds, the system instead goes back to the home screen. However, the latter was not implemented in either the low-fidelity or the high-fidelity prototypes.

The left trackpad mostly shows the car-specific controls shown in the figure. For some iterations of the different prototypes, however, this may temporarily be replaced with controls for writing a message, which is explained in the corresponding prototype section. In the figure, several icons can be seen placed around the trackpad area. In general, there is always a slot for an icon in the middle, as well as two, four, or eight sectors distributed around it. In the case of the left trackpad in standby mode, the middle button is active, with one button above and one button below. The middle button is for activating the ADAS of the car, which in our concept meant the ACC and AS systems. The upper and lower buttons of the left trackpad are f