Intelligent transportation systems
A collision avoidance system incorporating probabilistic threat-assessment and decision-making protocols

Master's Thesis within the division of Systems, Control and Mechatronics
CHARLIE SJÖDIN
ISABELLA JOHANSSON

Supervisors: Gabriel Rodrigues de Campos, Department of Signals and Systems; Klas Alenljung, DENSO Sales Sweden AB
Examiner: Paolo Falcone, Department of Signals and Systems

Master's Thesis 2015:01 (EX075/2015)
Department of Signals and Systems (S2)
Division of Systems, Control and Mechatronics
Chalmers University of Technology
SE-412 96 Gothenburg, Sweden, 2015
Telephone +46 31 772 1000

Abstract

This thesis develops a Collision Avoidance (CA) system able to cope with accident-prone urban intersections. These complex sections of the road network require accounting for both the different road-users and the topology. This thesis, however, constrains the problem to the EGO vehicle and one Perceived Other Vehicle (POV), thereby limiting the scope. The proposed CA system is divided into three modular blocks to allow individual future development.
The first block captures the states and predicts the future paths of the considered vehicles with the aid of the observed road geometry. Each vehicle's state is estimated using an Interacting Multiple Model (IMM) filter that matches a motion model to the probable manoeuvre, while the topology is observed and accounted for by a Bayesian Network (BN). The information from the IMM and the BN is combined to accurately predict, through an Unscented Kalman Filter (UKF), the future behaviour of each vehicle. At each prediction step, the second block computes the probability of an imminent collision using the joint cumulative distribution. If this probability exceeds a certain threshold, the third and final block is invoked to evaluate whether the detected threat requires a collision avoidance intervention. The evaluation is made through a formal threat-assessment method based on reachability tools, with the aim of finding the Point-of-No-Return: the point in time after which the vehicle inevitably ends up in a collision. The only considered intervention method is Autonomous Emergency Braking (AEB), chosen in order to keep the driver in the loop as the decision maker for as long as possible. The developed CA system was evaluated in several different scenarios in an urban intersection, spanning the possible configurations that could occur. The results revealed a robust system able to predict the future paths of both vehicles well enough to detect a probable collision. When a detected collision was evaluated to be unavoidable by the driver, the system triggered an emergency-brake intervention that prevented, or at least mitigated, the collision.

Keywords: Threat Assessment, Decision-making, Collision Avoidance, Active Safety, Bayesian Network, Interacting Multiple Model Filtering, Unscented Kalman Filter, Driver Intention, Urban Intersections.
Acknowledgements

We would like to sincerely thank our supervisor at Chalmers University of Technology, Gabriel Rodrigues de Campos, and our supervisor at DENSO Sales Sweden AB, Klas Alenljung, for their guidance and support throughout the project. We would also like to give special thanks to Johan Degerman and Tommy Johansson at DENSO for their support and passionate involvement. Our examiner Paolo Falcone also deserves our gratitude for the guidance and helpful meetings. We are also thankful to DENSO's active safety department as a whole for their helpfulness, support and for making us feel welcome. Thanks are also directed to Diogo Silva for the support with mathematical formulations and feedback on the report. Our opponents Herman Fransson, Malin Karlsson and Tomas Nilsson also deserve to be acknowledged for their helpful review of our work. Finally, we would like to thank our family and friends for their support through both the good times and the bad. Because folks, this has been an adventure that we will never forget.

Charlie Sjödin and Isabella Johansson, Gothenburg, December 2015

Contents

Acronyms
Variable Notations
List of Figures

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 General objectives
  1.4 Contributions
  1.5 System overview of the proposed solution
  1.6 Thesis organisation

2 Preliminaries
  2.1 Problem description
  2.2 Assumptions
  2.3 Vehicle model
  2.4 Environmental representation
  2.5 Evaluation setup

3 Long-term path prediction
  3.1 Related work
  3.2 Bayesian Network
    3.2.1 General description
    3.2.2 Application
    3.2.3 Proposed Bayesian Network algorithm
  3.3 Interacting Multiple Model (IMM) filter
    3.3.1 Mixing probability update and mixing
    3.3.2 Filtering
    3.3.3 Model probability update
    3.3.4 Output estimate calculation
    3.3.5 Application
    3.3.6 Proposed IMM algorithm
  3.4 Trajectory prediction using UKF
    3.4.1 Application
    3.4.2 Proposed trajectory prediction algorithm
  3.5 Long-term Path Prediction Procedure
    3.5.1 Long-term Path Prediction Algorithm

4 Collision detection
  4.1 Related work
  4.2 Collision definition
  4.3 Deterministic collision detection
  4.4 Probabilistic collision detection
    4.4.1 Bivariate Normal Distribution
    4.4.2 Joint Cumulative Distribution
  4.5 Collision Detection Procedure
    4.5.1 Predicted states
    4.5.2 Characteristics of a collision
    4.5.3 Collision Detection Algorithm

5 Collision avoidance
  5.1 Related work
  5.2 Collision avoidance definition
    5.2.1 Robust controllable sets
    5.2.2 Derivation of the attraction sets
    5.2.3 Formal definition of the collision avoidance constraints
    5.2.4 Integration of uncertainties
  5.3 Collision Avoidance Procedure
    5.3.1 Collision Avoidance Algorithm

6 Results and analysis
  6.1 System parameters
  6.2 System performance
    6.2.1 Non-evasive manoeuvres
    6.2.2 Evasive manoeuvres
  6.3 Robustness analysis
    6.3.1 Critical velocity limits
    6.3.2 Noise robustness
    6.3.3 Long-term path prediction
    6.3.4 Collision detection
    6.3.5 Collision avoidance

7 Discussion and concluding remarks
  7.1 System design
    7.1.1 Long-term path prediction
    7.1.2 Collision detection
    7.1.3 Collision avoidance
  7.2 System application
  7.3 Future work
  7.4 Concluding remarks

References

A System parameters
  A.1 Time parameters
  A.2 Thresholds
  A.3 Vehicle parameters
  A.4 Uncertainty and disturbance parameters

B System performance
  B.1 Non-evasive manoeuvres
    B.1.1 Avoidance manoeuvre
    B.1.2 Abandoned turn with 10 predictions
  B.2 Evasive manoeuvres
    B.2.1 Stationary obstacle
    B.2.2 Left turn by the POV across the EGO's path
    B.2.3 Real tests of left turn across path by either EGO or POV
    B.2.4 Attempted turn into the same lane
    B.2.5 Road merging scenario

C System robustness
  C.1 Performance comparison for the crossing intersection scenario

D Test cases for evaluation of the collision detection algorithm
  D.1 Apart
  D.2 Next to each other
  D.3 Overlapping
  D.4 EGO into POV
  D.5 POV into EGO
  D.6 EGO turn, POV straight
  D.7 EGO and POV turn

Acronyms

AEB - Autonomous Emergency Braking
BN - Bayesian Network
BND - Bivariate Normal Distribution
CA - Collision Avoidance
CAcc - Constant Acceleration (model)
CCA - Constant Curvature and Acceleration (model)
CPT - Conditional Probability Table
CT - Constant Turn-rate (model)
CTRA - Constant Turn Rate and Acceleration (model)
CTRV - Constant Turn Rate and Velocity (model)
CV - Constant Velocity (model)
DAG - Directed Acyclic Graph
DM - Decision Making (algorithm)
DTC - Distance To Collision
EGO - Ego vehicle (the vehicle equipped with the collision avoidance system)
EKF - Extended Kalman Filter
IMM - Interacting Multiple Model
JCDF - Joint Cumulative Distribution Function
LTAP - Left Turn Across Path
PNR - Point of No Return
POV - Perceived Other Vehicle (all vehicles except the EGO)
TA - Threat Assessment
TTC - Time To Collision
∆TTC - Collision Time Interval
UKF - Unscented Kalman Filter
V2I - Vehicle-to-Infrastructure (communication)
V2V - Vehicle-to-Vehicle (communication)
V2X - Vehicle-to-X (V2V, V2I etc.)

Variable Notations

Notation: Description (Definition)

xEGO, xPOV: State vector of the EGO resp. the POV (Eq. 2.1)
∆T: Sample time
k: Discrete time step
PLeft: Probability of going left (Eq. 3.13)
PRight: Probability of going right (Eq. 3.14)
PStraight: Probability of going straight (Eq. 3.15)
Π: Transition matrix (Eq. 3.16)
mo: Indicated motion model (Sec. 3.3)
N: No. of models used (Sec. 3.3)
x: Motion model (Eq. 3.17)
ẑ: Measurement model (Eq. 3.17)
z: Observed measurement (Sec. 3.3)
q: Process noise (Eq. 3.17)
ε: Measurement noise (Eq. 3.17)
fmo: Motion function dependent on model (Eq. 3.17)
hmo: Measurement function dependent on model (Eq. 3.17)
Amo, Bmo, Cmo, Dmo: Matrices dependent on model (Eq. 3.18)
Qmo: Process noise covariance matrix (Sec. 3.3)
R: Measurement noise covariance matrix (Sec. 3.3)
µ̄: Previous mixing probability (Sec. 3.3)
µ̂: Updated mixing probability (Sec. 3.3)
x̄: Previous state
x̂: Updated state
x̌: IMM state output
P̄: Previous covariance
P̂: Updated covariance
P̌: IMM covariance output
S: Innovation covariance (Eq. 3.25)
K: Kalman gain (Eq. 3.26)
Λ: Likelihood of each updated filter model (Eq. 3.29)
i: Prediction time steps
npredictions: No. of prediction steps
χ: Sigma points (Sec. 3.4)
n: No. of states (Sec. 3.4)
x̃: Augmented state mean (Sec. 3.4)
P̃: Augmented state covariance (Sec. 3.4)
Qpred: Process noise covariance matrix in UKF (Sec. 3.4)
λ: Scaling parameter (Eq. 3.36)
α, κ: Parameters determining the spread of χ (Sec. 3.4)
β: Prior information parameter (Sec. 3.4)
W: Weights in UKF (Sec. 3.4)
vpredictions: Predicted velocity
l, w: Length and width of the EGO (Chap. 4)
l̄, w̄: Length and width of the POV (Chap. 4)
η: No. of points spread out over the vehicles (Chap. 4)
j: Point used (Chap. 4)
Pcollision: Probability of collision (Eq. 4.7)
φ: Joint density function (Eq. 4.9)
Dms: Squared Mahalanobis distance (Eq. 4.10)
mE: Mean of estimated position, EGO (Chap. 4)
ΣE: Covariance of estimated position, EGO (Chap. 4)
mO: Mean of estimated position, POV (Chap. 4)
ΣO: Covariance of estimated position, POV (Chap. 4)
mEpj: Mean of point j for the EGO (Chap. 4)
mOpj: Mean of point j for the POV (Chap. 4)
tC: Time to collision
∆tC: Collision time interval
dC: Distance to collision
σ²dC: DTC variance
Danger: Boolean value for the collision risk
TCD: Threshold for collision detection
tstop: Stop time (reaction time + brake time)
vmax: Maximum allowable velocity (Chap. 5)
xCA: State for CA (Eq. 5.5)
u: Control input (Chap. 5)
S: Target set (Eq. 5.6)
X: Constraints on state (Eq. 5.7)
U: Constraints on input (Eq. 5.8)
−amax: Maximum deceleration (Chap. 5)
dmax: Maximum travel distance (Chap. 5)
A: Attraction set (Eq. 5.9)
L: Last attraction set (Chap. 5)
tlimit: Time limit to avoid collision (Chap. 5)
d̂C: Estimated DTC (Eq. 5.13)
PBelong: Prob. of belonging to A (Eq. 5.14)
p: No. of points uniformly distributed (Chap. 5)
TCA: Threshold for collision avoidance
Brake: Brake signal (Chap. 5)

List of Figures

1.1 Active safety research
1.2 Overview of the three objectives
1.3 Overview of the information flow through the system
2.1 Mapping of data between different coordinate systems
3.1 Overview of the Long-term path prediction subsystem
3.2 Overview of the connection between BN, IMM and UKF
3.3 General example of a DAG for a BN
3.4 DAG for the proposed BN
3.5 Real data collection where the intersection is driven north to south
3.6 Real data collection where the intersection is driven south to north
3.7 Profile for acceleration
3.8 Profile for lateral alignment
3.9 Profile for angular velocity
3.10 Overview of the IMM filter
3.11 Overview of the Long-term path prediction subsystem
4.1 Overview of the collision detection subsystem
4.2 Illustration of the sets representing the areas
4.3 Illustration of vehicle area, represented by sets, in the global frame
4.4 Illustration of interpretation of a traffic scenario
4.5 Illustration of the definition of TTC and ∆TTC
5.1 Overview of the collision avoidance subsystem
5.2 One-step Robust Controllable set
5.3 Definition of target and attraction sets
5.4 Illustration of clear and colliding path
5.5 Attraction set comparison
5.6 Definition of incorporated uncertainties
5.7 Normal distribution divided by standard deviations
6.1 Explanation of the optimal braking profile
6.2 Example of colliding trajectory in DTC and TTC plots
6.3 Flowchart of the system
6.4 Illustration of scenario for Abandoned turn
6.5 BN probabilities and IMM weights for the different models
6.6 Probability of collision
6.7 Illustration of the intervention timing
6.8 Illustration of scenario for Crossing intersection
6.9 BN probabilities and IMM weights for the different models
6.10 Probability of collision
6.11 Illustration of the intervention timing
6.12 DTC and TTC dependent on time
6.13 Illustration of scenario for Left turn by the POV across the EGO's path
6.14 BN probabilities and IMM weights for the EGO's different models
6.15 BN probabilities and IMM weights for the POV's different models
6.16 Probability of collision
6.17 Illustration of the intervention timing
6.18 DTC and TTC dependent on time
6.19 Illustration of scenario for Left turn by the EGO across the POV's path
6.20 BN probabilities and IMM weights for the different models
6.21 Probability of collision
6.22 Illustration of the intervention timing
6.23 DTC and TTC dependent on time
6.24 Low velocity comparison for the Abandoned turn scenario
6.25 Regular velocity comparison for the Abandoned turn scenario
6.26 High velocity comparison for the Abandoned turn scenario
6.27 Low velocity comparison for the Crossing intersection scenario
6.28 Regular velocity comparison for the Crossing intersection scenario
6.29 High velocity comparison for the Crossing intersection scenario
6.30 Low velocity comparison for the LTAP by EGO scenario
6.31 Regular velocity comparison for the LTAP by EGO scenario
6.32 High velocity comparison for the LTAP by EGO scenario
6.33 Low velocity comparison for the LTAP by POV scenario
6.34 Regular velocity comparison for the LTAP by POV scenario
6.35 High velocity comparison for the LTAP by POV scenario
6.36 ROC curve for Negative case 1
6.37 ROC curve for Negative case 2
6.38 Subsystem evaluation for the Abandoned turn scenario
6.39 Each subsystem part's detection rate influence for the Abandoned turn scenario
6.40 Subsystem evaluation for the LTAP EGO scenario
6.41 Each subsystem part's detection rate influence for the LTAP EGO scenario
6.42 Subsystem evaluation for the LTAP POV scenario
6.43 Each subsystem part's detection rate influence for the LTAP POV scenario
6.44 Collision detection and avoidance with wrongfully given evidences
6.45 BN and IMM for a comparison with wrongfully given evidences
6.46 Overview of the IMM performance for a left pathway
6.47 Illustration of the IMM's internal performance for a straight pathway
7.1 Illustration of too long prediction horizon giving unwanted result
7.2 Illustration of a factor giving a well-performed system even with faulty predicted trajectory
A.1 Illustration of the purpose of the covariance matrix
B.1 Illustration of scenario for Avoidance manoeuvre
B.2 BN probabilities and IMM weights for the different models
B.3 Probability of collision
B.4 Probability of collision
B.5 Illustration of the intervention timing
B.6 Illustration of scenario for Stationary obstacle
B.7 BN probabilities and IMM weights for the different models
B.8 Probability of collision
B.9 Illustration of the intervention timing
B.10 DTC and TTC dependent on time
B.11 Illustration of scenario for POV making an aggressive left turn
B.12 BN probabilities and IMM weights for the different models
B.13 Probability of collision
B.14 Illustration of the intervention timing
B.15 DTC and TTC dependent on time
B.16 BN probabilities and IMM weights for the different models
B.17 Probability of collision
B.18 Illustration of the intervention timing
B.19 Illustration of scenario for Attempted turn into the same lane
B.20 BN probabilities and IMM weights for the different models
B.21 Probability of collision
B.22 Illustration of the intervention timing
B.23 DTC and TTC dependent on time
B.24 Illustration of scenario for EGO turn and POV going straight
B.25 BN probabilities and IMM weights for the different models
B.26 Probability of collision
B.27 Illustration of the intervention timing
B.28 DTC and TTC dependent on time
B.29 Illustration of scenario for POV going right and EGO going straight
B.30 BN probabilities and IMM weights for the different models
B.31 Probability of collision
B.32 Illustration of the intervention timing
B.33 DTC and TTC dependent on time
B.34 Illustration of scenario for Merging of two vehicles
B.35 BN probabilities and IMM weights for the different models
B.36 Probability of collision
B.37 Illustration of the intervention timing
B.38 DTC and TTC dependent on time
C.1 Evaluation of each long-term path prediction part's influence
C.2 Detection rate influence for each of the long-term path prediction parts

1 Introduction

Development within the automotive industry has in recent years made significant progress in the area of active safety for passenger cars. Yet one challenging problem still needing attention is collision avoidance at intersections. Focusing new intelligent safety systems on collision avoidance at intersections is therefore important, since road intersections are among the most accident-prone and complex sections of the road network. This compels Advanced Driver Assistance Systems (ADAS) to cope with highly complex road scenarios, such as urban intersections, where hard-to-predict road-users and the surrounding environment need to be accounted for. This thesis proposes a collision avoidance (CA) system able to deal with these accident-prone urban intersections.

1.1 Background

The automotive industry is an area in constant development, where Intelligent Transport Systems (ITS) are a key factor in increasing safety and tackling congestion and growing emissions. With the use of ITS, transport can become safer, more efficient and more sustainable [1]. Today, several research projects within the automotive industry focus on developing fully automated vehicles to be put on the roads in the near future. Extensive research has been conducted within Vehicle-to-Vehicle (V2V) [2][3] and Vehicle-to-Infrastructure (V2I) [4][5] communication, but as it will take several decades before the traffic network is fully utilised by automated vehicles, standalone automation solutions are necessary. Even though substantial research has been conducted on standalone automation such as lane keeping [6], adaptive cruise control [7] and self-driving cars [8], as illustrated in Figure 1.1, coping with all parts of the complex road network has not yet been fully addressed.
An especially accident-prone part of the complex traffic network is urban intersections, as several different road-users co-exist in the same road section, all having multiple choices of direction. Road safety statistics for Europe [9][10][11] disclose that around 20% of the fatalities and around 43% of the overall accidents occur at intersections, which has in fact been the trend for over a decade. The first problem encountered in intersection scenarios, both for humans and machines, is to determine the paths of the other road-users. The path prediction is needed to evaluate if and where the paths of the vehicles will intersect.

Figure 1.1: Research within active safety functions for three different manufacturers: (a) lane keeping system from Toshiba [12], (b) rear braking system by Subaru [13], (c) City Safety system from Volvo Cars [14].

If the paths are evaluated to intersect at the same time step, then consequently a collision will occur. In theory this seems trivial, but in reality the time span from when a collision is detected to when it actually occurs is often very short. A human is affected by both reaction and decision latency as well as distractions, which adds a time delay when evaluating the collision risk. A machine, on the other hand, cannot get distracted, but can nevertheless have uncertainties in its sensor measurements. The safest approach would be to stop the vehicle as soon as there is a risk of collision, but for a machine to gain acceptance among drivers as a tool to avoid collisions, the system will need to give very few false interventions. If and when to initiate a braking intervention is therefore of importance.

1.2 Purpose

In this thesis, the goal is to develop and demonstrate formal decision-making and threat-assessment algorithms with particular attention to complex traffic intersection scenarios.
Given an unknown complex intersection, the objective is to provide a decision-making protocol preventing or mitigating a potential collision. Moreover, formal analysis methods verifying safety requirements should be provided for novel intelligent control strategies. This Master's thesis project, a collaboration between DENSO Sales Sweden AB and Chalmers University of Technology, develops a Collision Avoidance (CA) system. The purpose is to develop a system comprising long-term path prediction for the included vehicles, collision risk detection and, finally, a collision avoidance strategy, as later illustrated in Figure 1.2. The proposed system shall be kept modular so that individual parts of the system can be improved in the future.

1.3 General objectives

In accordance with the background and purpose described in the previous sections, the general objectives for this thesis are stated as:

• Proposing a method to merge pertinent information about the surrounding environment in order to accurately predict the behaviour of oncoming traffic.
• Developing a stochastic solution evaluating the risk of collision using the previously mentioned predictions.
• Defining a formal, robust decision-making procedure for least-invasive braking interventions.
• Evaluating the proposed Collision Avoidance (CA) system with both simulations and real-time implementation.

1.4 Contributions

The contributions of this thesis are:

• A combination of a Bayesian Network (BN) and an Interacting Multiple Model (IMM) filter, together with an Unscented Kalman Filter (UKF), able to compute long-term path predictions of vehicles.
• A novel way of defining a collision, resulting in the ability to detect a large variety of possible collision configurations. The method is probabilistic, i.e. it calculates the probability of collision.
• A CA system able to predict and avoid/mitigate several different collision configurations in intersection scenarios.
The content of this thesis also serves as the basis for a technical article which will be submitted in the near future.

1.5 System overview of the proposed solution

A graphical overview of the presented system is illustrated in Figure 1.2. A Long-term path prediction is executed first, which progresses into a Collision detection assessment and finally continues into a Collision avoidance procedure. A flow chart of the information distributed through the system is shown in Figure 1.3. The figure illustrates how map and sensor data serve as inputs to the Bayesian Network (BN), whose result is transferred, together with the sensor data, to the IMM. The combined result from the BN and IMM then provides a decision of probable vehicle motion to the Unscented Kalman Filter (UKF), which finally provides a trajectory propagation along the whole prediction horizon. All of these cooperating functions are referred to as the long-term path prediction block. The predictions are thereafter sent to the collision detection block for evaluation of a probable collision, and finally to the collision avoidance block for an intervention decision. The respective blocks are described in Chapters 3, 4 and 5.

Figure 1.2: Block scheme showing a graphical overview of the total system with the three different objectives: Long-term path prediction, Collision detection and Collision avoidance.

Figure 1.3: Block scheme showing the flow of information throughout the total system: map data and sensor data feed Block 1 (Long-term path prediction: BN, IMM and UKF), followed by Block 2 (Collision detection) and Block 3 (Collision avoidance).

1.6 Thesis organisation

Chapter 2 provides the objectives, assumptions, defined models and other prerequisites necessary for the reader to get an underlying understanding of the system design.
The focus of Chapter 3 lies on explaining the theory and design of how to track and predict the states of both the Perceived Other Vehicle (POV) and the EGO vehicle. Chapter 4 defines the risk of collision, evaluated based on the probabilistic predictions from Chapter 3. Chapter 5 thereafter outlines the threat assessment, based on the probability of collision obtained in Chapter 4. In Chapter 6, the results from the long-term path prediction (Chapter 3), collision detection (Chapter 4) and collision avoidance (Chapter 5) subsystems are evaluated both separately and combined into the complete CA system. A discussion of the obtained results and concluding remarks with possible future developments is presented in Chapter 7.

2 Preliminaries

This chapter describes the prerequisites, defined problems and preliminary work prior to the research conducted in regard to the first three objectives presented in Section 1.3. Section 2.1 provides a problem description, further explaining the general objectives from Section 1.3, whereas the assumptions made are outlined in Section 2.2. This chapter also gives the vehicle and environment representations, in Sections 2.3 and 2.4 respectively. Finally, the intended evaluation setup is described in Section 2.5.

2.1 Problem description

As described in the general objectives, the overall problem to be solved is the detection and avoidance (or at least the mitigation) of a collision. Moreover, the intention is to keep the number of false interventions few enough¹. The stated problems are divided into three main areas, which as stated before are: Long-term path prediction, Collision detection and Collision avoidance. The problems to be addressed for the Long-term path prediction are:
• How to fuse the information from map and sensor data to make long-term predictions for both the ego vehicle (EGO) and the Perceived Other Vehicle (POV)?
• How to incorporate sensor imperfections?
The Collision detection should consider:
• How to detect a collision?
• How to determine and define the probability of a collision?
• How to define the characteristics of a collision (time to collision, duration of collision, distance to collision)?
The Collision avoidance should finally answer:
• How to decide whether an intervention is needed?
• How to make an avoidance intervention (by performing an emergency braking) as late as possible and with very few false interventions?

¹ ISO 26262 requires less than one false Autonomous Emergency Braking (AEB) intervention during 10 years or within 100 000 km of driving. This is, however, out of the scope of the thesis, since extensive test hours would be needed to evaluate the system fully.

2.2 Assumptions

When solving the problems stated in Section 2.1, some assumptions should be regarded to simplify and delimit the scope of the thesis. The following assumptions are therefore considered throughout the entire thesis report:
• Only two vehicles are considered in each scenario, the EGO and one POV. The EGO is the vehicle with the Collision Avoidance (CA) system.
• Perfect weather and road conditions, i.e. little or no slip is considered.
• The vehicles are moving with a maximum velocity according to the speed limit in urban areas of 50 km/h.
• Reliable digital map data is assumed to be given as input, i.e. no investigation of how map data is created will be concerned.
• Only one lane in each driving direction will be considered, i.e. no lane identification will be needed, and collisions with vehicles in the own driving lane are not considered.
• The POV and EGO are assumed to be cars (no other road-users will be considered).
• No communication with surrounding objects (moving or stationary), as for example Vehicle-to-Vehicle (V2V) or Vehicle-to-Infrastructure (V2I) communication, is considered in this work.
• The sensors used for the EGO are assumed to give global state information.
These comprise positions in longitude and latitude as well as the global heading. Velocity and angular velocity are assumed to be measured by the vehicle's inertial measurement unit (IMU) and wheel speed sensors.
• The sensors used to observe the POV are assumed to give measurements relative to the EGO. This could be achieved, for example, by the use of a radar or a camera.

2.3 Vehicle model

The mathematical representations of the EGO and the POV play a crucial role in the robustness and precision of the developed system. The state information of the EGO is easily obtained by on-board sensors such as GPS, IMU and wheel speed sensors. The POV, on the contrary, will be observed by sensors mounted on the EGO, such as, for example, a camera or radar. The state information of the POV will thus never be completely accurate, due to imperfections introduced by the sensors, such as for example measurement speed, measurement accuracy or reflection misinterpretation. With the sensorial information at hand, the representations of the EGO and POV will be made with a polar and a spherical state space representation, respectively. The state vector configurations will therefore be:

$$x_{EGO} = \begin{bmatrix} x \\ y \\ \theta \\ v \\ w \end{bmatrix} \quad \text{and} \quad x_{POV} = \begin{bmatrix} x \\ y \\ v_x \\ v_y \\ w \end{bmatrix} \qquad (2.1)$$

where x and y are the global Cartesian positions and w represents the angular velocity, with the same notation for both the EGO and the POV. The disparity between the two representations is that the EGO's heading is denoted θ and its speed v, whereas the POV's velocities, measured in the x- and y-directions, are denoted v_x and v_y respectively. The spherical representation is used due to the intention of considering a radar as on-board sensor, which can only accurately measure positions and the radial component of the velocity, i.e. the instantaneous change in range between the radar and the target. Heading and angular velocity can therefore not be observed for the POV.
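To illustrate why only the radial velocity component is observable, consider a minimal sketch of an idealised radar observation of the POV relative to the EGO (function and variable names are assumed for illustration):

```python
import math

# Idealised radar observation of a target at relative position (x, y) with
# relative velocity (vx, vy): a radar reports range, bearing and range rate.
# The velocity component perpendicular to the line of sight is unobservable.
def radar_observation(x, y, vx, vy):
    rng = math.hypot(x, y)                  # range to the target
    bearing = math.atan2(y, x)              # bearing angle
    range_rate = (x * vx + y * vy) / rng    # radial (line-of-sight) velocity
    return rng, bearing, range_rate
```

For a target at (3, 4) m moving with (1, 0) m/s, the radar sees a range of 5 m and a range rate of 0.6 m/s; the remaining 0.8 m/s of tangential motion, and hence the heading, must be inferred by the filter rather than measured.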
The angular velocity will not be measured, but is nevertheless kept in the state vector to enable a constant turn rate to be used in a turn model. Modelling a vehicle as accurately as possible is a well-known problem; a review of the most common motion models is for example given in [15]. There, the authors define models describing a constant motion, such as the Constant Velocity/Acceleration (CV/CAcc) or the Constant Turn-rate (CT) model. More comprehensive models are also presented, where some combine several motions in the same model, such as the Constant Turn Rate and Velocity/Acceleration (CTRV/CTRA) model, or include topology knowledge, as in the Constant Curvature and Acceleration (CCA) model. These models all have pros and cons with respect to their ability to accurately model a vehicle, but none outperforms the dedicated single-motion models within their respective "motion of expertise". The conclusion is that each model describing a single motion (CV, CT or CAcc) is the best suited model within its respective motion. The problem thus occurs when trying to represent a vehicle that transitions between different motions. One solution is an Interacting Multiple Model (IMM) filter, which uses each model when it is best suited. This methodology will be described in Chapter 3, where the CV- and CT-models for both the EGO and POV, denoted according to (2.1), are given as:

$$\text{CVP:} \quad \begin{bmatrix} x \\ y \\ \theta \\ v \\ w \end{bmatrix}(t+\Delta T) = \begin{bmatrix} x \\ y \\ \theta \\ v \\ 0 \end{bmatrix}(t) + \begin{bmatrix} v\cos(\theta)\,\Delta T \\ v\sin(\theta)\,\Delta T \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad (2.2)$$
$$\text{CTP:} \quad \begin{bmatrix} x \\ y \\ \theta \\ v \\ w \end{bmatrix}(t+\Delta T) = \begin{bmatrix} x \\ y \\ \theta \\ v \\ 0 \end{bmatrix}(t) + \begin{bmatrix} v\cos(\theta)\,\Delta T \\ v\sin(\theta)\,\Delta T \\ w\,\Delta T \\ 0 \\ v/R \end{bmatrix} \qquad (2.3)$$

$$\text{CV:} \quad \begin{bmatrix} x \\ y \\ v_x \\ v_y \\ w \end{bmatrix}(t+\Delta T) = \begin{bmatrix} x \\ y \\ v_x \\ v_y \\ 0 \end{bmatrix}(t) + \begin{bmatrix} v_x\,\Delta T \\ v_y\,\Delta T \\ -v_y\,w\,\Delta T \\ v_x\,w\,\Delta T \\ 0 \end{bmatrix} \qquad (2.4)$$

$$\text{CT:} \quad \begin{bmatrix} x \\ y \\ v_x \\ v_y \\ w \end{bmatrix}(t+\Delta T) = \begin{bmatrix} x \\ y \\ 0 \\ 0 \\ 0 \end{bmatrix}(t) + \begin{bmatrix} \frac{v_x}{w}\sin(w\,\Delta T) - \frac{v_y}{w}\left(1-\cos(w\,\Delta T)\right) \\ \frac{v_x}{w}\left(1-\cos(w\,\Delta T)\right) + \frac{v_y}{w}\sin(w\,\Delta T) \\ v_x\cos(w\,\Delta T) - v_y\sin(w\,\Delta T) \\ v_x\sin(w\,\Delta T) + v_y\cos(w\,\Delta T) \\ v/R \end{bmatrix} \qquad (2.5)$$

where ∆T denotes the time step between samples. CVP (Constant Velocity Polar) and CTP (Constant Turn-rate Polar) describe the EGO's motion, where CVP is used for a straight path and CTP for turning. The CV model, used for a straight path, and the CT model, used for a turning manoeuvre, describe the POV's intended motion.

2.4 Environmental representation

Since the representations of the EGO and POV should be made as accurately as possible, the environment representation plays a key part in the Collision Avoidance (CA) system's ability to work as intended. Since sensors collect data from different points of view, all the data need to be brought to the same reference frame. For example, a GPS gives information of longitudinal and lateral position at the earth's surface (geodetic frame), whereas mounted sensors such as a radar or a camera generate data relative to the object they are mounted on (local frame). Since the observations of the EGO will be captured in the geodetic frame and those of the POV in the local frame, these will need to be transformed into a common coordinate frame. To also be able to benefit from map-based constraints and traffic rules, the states of the accounted vehicles will need to be mapped to the global Cartesian coordinate frame.
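As a sanity check of the CT model (2.5), a minimal Python sketch of a one-step propagation of the POV state is given below. It assumes a nonzero turn rate w, and it simplifies the last component by keeping w constant (the thesis instead writes the update v/R, i.e. the turn rate implied by the road curvature radius R):

```python
import math

def ct_step(x, y, vx, vy, w, dT):
    """One-step propagation of the POV state under the constant turn-rate
    (CT) model, following (2.5); w is the (nonzero) turn rate in rad/s."""
    s, c = math.sin(w * dT), math.cos(w * dT)
    return (
        x + (vx / w) * s - (vy / w) * (1.0 - c),   # new x position
        y + (vx / w) * (1.0 - c) + (vy / w) * s,   # new y position
        vx * c - vy * s,                            # velocity rotated by w*dT
        vy * c + vx * s,
        w,                                          # turn rate kept constant here
    )
```

For example, a vehicle at the origin heading along x at 10 m/s with w = π/2 rad/s ends up, after one second, heading along y with its speed unchanged, i.e. it has traced a quarter of a circle; the rotation in the velocity components is what distinguishes the CT model from the CV model (2.4) with w = 0.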
Figure 2.1: Block scheme showing how the different objectives will be regarded with respect to the different coordinate systems.

This conversion process is illustrated in Figure 2.1, where the three different coordinate systems are described as:
• Geodetic; a global coordinate system referenced to the earth's surface, e.g. positions in longitude and latitude.
• Earth-Centered, Earth-Fixed (ECEF); a global Cartesian coordinate system.
• Local; a local Cartesian coordinate system, with origin at the EGO vehicle.
The chosen strategy is to transform all the state information to the middle layer, ECEF, thus being able to combine all the data, with the additional possibility of locating the vehicles on a digital map.

2.5 Evaluation setup

The proposed algorithms should be integrated in simulation environments such as MATLAB/Simulink and PreScan [16]. Furthermore, experimental validation tests should be driven, in collaboration with DENSO's technical staff, using fully equipped Volvo S60 demo vehicles. Verification of the system functionality is conducted through a variety of test scenarios, evaluated in both four-way intersections and T-intersections. The scenarios include both non-evasive and evasive manoeuvres and are further explained in Chapter 6. For full evaluation of the system, different restrictions on the vehicles' possible paths are set offline, e.g. the possible driving directions in an intersection, which in reality would have been given by a digital map.

3 Long-term path prediction

In a traffic situation where an imminent threat has been observed, the time before colliding is often very short.
Hence, it is not sufficient to start braking once the observed threat has entered the designated path; instead, the brakes need to be applied before it has entered. Being able to accurately predict the road participants' future behaviour, to gain invaluable extra time to act, will increase a Collision Avoidance (CA) system's performance immensely. The focus of this chapter is therefore to present a solution able to evaluate information of the ego vehicle's (EGO's) surrounding environment to accurately predict future occurrences. The information comprises road network and road user knowledge along the intended path, which could be collected by a variety of sensors and tools such as GPS, Inertial Measurement Unit (IMU), radar and digital map. This chapter thus contains a description of how support from map and sensor data can improve the ability to recognise a driver's intended manoeuvre, illustrated in Figure 3.1. The intended design of the long-term path prediction as a part of the full system is depicted in Figure 3.2. Here, the propagated trajectories are calculated in parallel for both the EGO and the Perceived Other Vehicle (POV) at each time step. The subsystem contains three serially coupled parts taking care of data collection and validation, filtering, and finally prediction. This chapter is divided into five sections. First, an introduction to the related work is given in Section 3.1. Secondly, the method for incorporating driver behaviour and map data to predict a future manoeuvre is explained in Section 3.2. Section 3.3 describes the implemented motion models with the associated filtering method to represent a vehicle's state. Thereafter, the method to perform long-term path predictions is explained in Section 3.4.
Finally, the combination of the three blocks, leading to the complete long-term path prediction, is described in Section 3.5.

3.1 Related work

In the literature, several approaches are used for the purpose of positioning the EGO and POV as well as predicting their probable paths. The most common method used to estimate the current position and predict the future position is the Kalman filter, or its extensions such as the Extended Kalman Filter (EKF) or the Unscented Kalman Filter (UKF), as has been done in [17]. With the use of a Kalman filter comes the ability of adding state constraints, as has for example been done in [18]. Here, the authors use map data providing information of road restrictions to be used as constraints. With the use of such constraints, some approaches resulted in methods able to detect in which lane a vehicle is travelling, as for example in [19], where a Bayesian Network is used to identify the most probable driving lane by fusing map and sensor data.

Figure 3.1: Overview of the Long-term path prediction subsystem as a part of the whole system, where the Bayesian Network (BN) is covered in Section 3.2, Interacting Multiple Models (IMM) in Section 3.3 and the Unscented Kalman Filter (UKF) in Section 3.4. The left part of the figure illustrates the intended procedure with predictions (shown as ellipses) for both vehicles included in the scenario.

The approach of pinpointing a vehicle's position on a map is denoted map-matching, which has been extensively studied in the literature. Since the developments in this area are so extensive, the map data could either be assumed to be fully reliable as in [19] or to have a small uncertainty as in [20].
In [21] the positioning is achieved with a UKF assuming "perfect" map knowledge, where the POV's location on the map is obtained through Vehicle-to-Vehicle (V2V) communication. This is also the underlying assumption for the proposed design, which hence only considers reliable map data given as input, i.e. no investigation of how map data is created will be concerned. With the position of a vehicle considered to be reliably defined, focus could instead lie on how to predict the future trajectory of the vehicle. An approach presented by [22] combines information from traffic rules, digital map and sensor data into a predicted trajectory for the considered vehicle.

Figure 3.2: Overview of the connection between BN, IMM and UKF, which run in parallel for the EGO and POV. Their respective probable paths are then compared and analysed with the aim of predicting and preventing an imminent collision through the Collision Detection and Collision Avoidance blocks.

This approach uses an EKF for vehicle positioning in accordance with the provided observations. Additionally, an Object-Oriented Bayesian Network (OOBN) is used to define probabilistic relations between the different information parameters for the ability to predict a probable future trajectory. In [23] the authors use a Bayesian Network (BN) to combine driver behaviour (i.e. turn signal) and the layout of a specific intersection in order to estimate drivers' probable manoeuvres. A similar solution using Dempster-Shafer theory is presented in [24], where instead the different probabilities are set as hypotheses (stop before the intersection or taking a right, left or straight path) dependent on velocity changes.
The hypotheses could thereafter be given altered probabilities depending on road network constraints. The profiles for the hypotheses were in this approach statistically created using real data. Filtering techniques, such as Kalman filters, usually require a model to represent the moving object's probable motion. A common approach for incorporating a vehicle's movement is the use of a single motion model, such as the Constant Velocity (CV), Constant Turn rate (CT) or Constant Acceleration (CAcc) model. Tracking driver intentions is however complicated, since vehicles do not exhibit one type of motion but rather tend to switch between a set of typical motions. An alternative way of including several motion models simultaneously (such as CT for turning and CV or CAcc for straight driving) and switching to the most suitable model is the use of Interacting Multiple Model (IMM) theory, as in [25], [26], [27]. The solution presented in this chapter is inspired by [26], where a combination of BN hypotheses and an IMM-filtering technique is used to estimate the EGO's and POV's probable paths. This thesis' proposed design will thus use both a BN and an IMM for validation of sensor and map data. The BN will observe the acquired data and compare it to predefined thresholds; if these thresholds are exceeded, an indication of the probable driver manoeuvre can be given. The IMM will in turn use the data to tune the motion models representing the different probable behaviours. The BN and IMM will then collaborate to reach a robust decision of which motion model best describes the future probable manoeuvre. The decided model will thereafter be used in a UKF to make predictions some time steps into the future. Note that the CA system will only be implemented in the EGO, hence all calculations will be made with respect to the own driven vehicle.
3.2 Bayesian Network

A Bayesian Network (BN) is a structured graphical representation of probabilistic relationships between different random variables. A general description of BNs, with theory according to [28], is presented in the sequel, whereas the BN applied in the proposed solution is presented in Section 3.2.2.

3.2.1 General description

For the sake of clarity, the main ideas behind the BN will be explained through an example. Consider Figure 3.3, where the BN is composed of nodes (A to E) interconnected with arrows to demonstrate the nodes' respective dependencies. A node in a BN represents a random variable, in the sense that it may be an observable quantity, a latent variable, an unknown parameter or a hypothesis. The arrows represent probabilistic links, also called edges, being conditional dependencies between two nodes. A description of the intermediate dependencies between all nodes in a network can be defined in a Conditional Probability Table (CPT). Nodes not being connected represent variables that are conditionally independent of each other. A parent is defined as the predecessor of a node, e.g. node A is the parent of node C in Figure 3.3. Each node is thus associated with a probability distribution, taking different values depending on the observations made by the considered node and connected nodes (child and parent nodes). The key components of a BN are therefore how the structure of the network and the CPTs are built up. For example, since the random variable E in Figure 3.3 has parents B, C and D, a CPT needs to be defined as P(E|B,C,D). If E instead had no parents (predecessor nodes), an Unconditional Probability Table P(E) should be defined.

Figure 3.3: Example of a DAG (Directed Acyclic Graph) for a BN. A to E are called nodes and the arrows are called edges. The graph shows for example that A, B and D are independent and that A is a parent to C.
Furthermore, to combine observations for different nodes, a well-known equation in probability theory is used. The equation is Bayes' rule, defined in [28], which calculates a posterior probability distribution of a parameter V given an observation y:

$$P(V|y) = \frac{P(y|V)P(V)}{P(y)} \qquad (3.1)$$

This rule can thus be used to update our beliefs regarding V, given the obtained information of the observation y. P(V) denotes the prior probability distribution of V and P(y|V) is the likelihood of y given the information of V. If V can take a number of m different values (V = [V_1, V_2, ..., V_m]), the normalisation constant P(y) can be calculated as:

$$P(y) = \sum_{j=1}^{m} P(y|V_j)P(V_j) \qquad (3.2)$$

If there instead exist multiple observations y_1, ..., y_n being conditionally independent, the joint likelihood distribution of all the observations will be the product of the individual likelihoods. This results in a posterior probability distribution based on Bayes' rule, called Naive Bayes, which is defined as:

$$P(V|y_1, ..., y_n) = c_{norm} P(V) \prod_{i=1}^{n} P(y_i|V) \qquad (3.3)$$

where c_norm is a normalisation constant given as c_norm = 1/P(y_1, ..., y_n), where the joint probability is calculated with the use of (3.2) according to:

$$P(y_1, ..., y_n) = \sum_{j=1}^{m} P(V_j) \prod_{i=1}^{n} P(y_i|V_j) \qquad (3.4)$$

3.2.2 Application

A BN can be used as a tool to process and evaluate information gained from a digital map accompanied by sensor data. The fusion of sensor and map data can infer a driver's behaviour, hence the BN's purpose in the proposed design is to evaluate surrounding information in the same manner as a human driver. Similar to when you drive a vehicle yourself, you would regard information such as the lane alignment of oncoming vehicles, whether they decrease in speed or have the turn signal activated. Additional information could be how you interpret the road network to be connected. Each of these perceptions could infer a probable behaviour of the oncoming vehicle.
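Bayes' rule (3.1)-(3.2) can be illustrated with a small numeric example; the prior and likelihood values here are purely illustrative, not taken from the thesis:

```python
# Worked example of Bayes' rule for a binary parameter V (say, "turn" vs
# "straight") after observing y (say, an activated turn signal).
p_v = {"turn": 0.5, "straight": 0.5}            # prior P(V), uniform
p_y_given_v = {"turn": 0.8, "straight": 0.2}    # likelihood P(y | V)

p_y = sum(p_y_given_v[v] * p_v[v] for v in p_v)             # eq. (3.2): P(y) = 0.5
posterior = {v: p_y_given_v[v] * p_v[v] / p_y for v in p_v}  # eq. (3.1)
# posterior["turn"] -> 0.8: the observation shifts belief towards a turn
```

With a uniform prior, the posterior simply renormalises the likelihoods; an informative prior would weight them accordingly.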
The BN can in the same fashion evaluate this information to predict a vehicle's probable behaviour. The desired result is to verify whether the driver's intended trajectory leads to a collision, and to perform the avoidance intervention if this is concluded to be necessary. Figure 3.4 shows a graphical representation of the proposed BN as a Directed Acyclic Graph (DAG). The BN is built up by a number of nodes, as seen in the figure, namely Road restrictions, Angular velocity, Turn signal, Acceleration changes, Road markings/Traffic rules, Lateral alignment and Probable manoeuvre. Static profiles for acceleration, angular velocity and lane alignment have been created by the use of real data, to continuously compare with acquired observations such as map and sensor data. The profiles are built up by statistical information and will thus work in a similar fashion as a look-up table. The node Probable manoeuvre will finally gather the parent nodes' resulting probabilities and compose these into the probability of a left (PLeft), right (PRight) and straight (PStraight) driving direction for each vehicle separately.

Figure 3.4: Directed Acyclic Graph (DAG) for the BN depicting the different nodes. When new observations are made for each node, a new posterior probability will be calculated in the node Probable manoeuvre, which is the output of the BN. The Probable manoeuvre node gives the probability of left (PLeft), right (PRight) and straight (PStraight).

The network's different nodes and associated profiles are described in Sections A to F, whereas the final Section G describes the node Probable manoeuvre, giving the probability of a left, right and straight manoeuvre respectively.
The statistical profiles for the nodes Acceleration changes, Lateral alignment and Angular velocity, in Sections C, D and F, were created from driving data in an intersection at Bäckebol, Gothenburg, seen in Figures 3.5 and 3.6. The statistical profiles were thus created from the behaviour at intersections when performing either a straight, left or right driving manoeuvre.

Figure 3.5: Global driving paths for the vehicle when data was collected for creation of the profiles. The distances are measured in meters, with the origin placed in the center of the intersection. North is up and south is down in the figure. The intersection was driven from north to south in the sense that the driven vehicle went from the upper to the lower part of the figure.

Figure 3.6: Global driving paths for the vehicle when data was collected for creation of the profiles. The distances are measured in meters, with the origin placed in the center of the intersection. North is up and south is down in the figure. The intersection was driven from south to north in the sense that the driven vehicle went from the lower to the upper part of the figure.

A Road restrictions

The node Road restrictions describes how the roads, for example in an intersection, are linked together. Based on the possible connections for the road the vehicle is travelling on, probabilities of possible manoeuvres will be given. Observe that this node is to be distinguished from Road markings/traffic rules, since it only depends on the interconnection between the roads, without any consideration of traffic rules. The information for the road restrictions is collected from a digital map.
In this work, it will give the probability of a left, right and straight path in an intersection, and can be described by the likelihood of the Road restrictions RR given the manoeuvre M as:

$$P(RR|M) = f_{roadRestrict}(road_L(k), road_R(k), road_S(k)) \qquad (3.5)$$

where f_roadRestrict is a look-up table dependent on the three Boolean variables road_L, road_R and road_S in time step k, which are set to true if there exists a road to the left, to the right and straight ahead, respectively.

B Road markings/traffic rules

The difference between Road restrictions and Road markings/traffic rules is that this node evaluates information about possible legal driving directions. The information for the Road markings/traffic rules can, similarly to the road restrictions, be collected from a digital map, or by observation of markings in the road, such as painted arrows. With respect to the predefined traffic rules for the interconnected roads, the probability of going left, right and straight will be given as the likelihood of the Road markings RM, given the manoeuvre M, according to:

$$P(RM|M) = f_{trafficRules}(Link_L(k), Link_R(k), Link_S(k)) \qquad (3.6)$$

where Link_L, Link_R and Link_S are Boolean variables in time step k, set to true if a legal manoeuvre can be made in the direction of left, right and straight respectively. These variables serve as inputs to the function f_trafficRules, which works as a look-up table.

C Acceleration changes

The node regarding the acceleration changes has a dependency on stop lines. If there is a stop line, a vehicle is required to stop in any case, hence the acceleration behaviour cannot be included in the overall evaluation. If braking occurs and there is a stop line, the left, right and straight manoeuvres will consequently be assigned equal probability.
If there is no stop line and the vehicle is at a certain distance to the intersection, the acceleration is collected and compared to the predefined profile seen in Figure 3.7 to determine the probability of a left, right and straight driving direction. The profile was created from collected data for normal driving where turning or straight driving manoeuvres were performed. As seen in Figure 3.7, the profile has three different levels, -0.3, -0.5 and -0.7 [m/s²], to be used in the absence of a stop line. The reason for setting the first level at -0.3 [m/s²] can also be observed in the figure: the data for the straight manoeuvre always stays above the -0.3 [m/s²] limit, hence this is the limit which distinguishes a turn from a straight manoeuvre. The other limits are used to increase the certainty that a turning manoeuvre will be performed, i.e. the higher the deceleration, the higher the probability of a future turn. The likelihood of the Acceleration changes AC given the manoeuvre M can be described by:

$$P(AC|M) = f_{accChange}(stopline(k), acc(k)) \qquad (3.7)$$

where f_accChange serves as the look-up table with the Boolean input stopline, which is true if a stop line exists, and the input describing the acceleration, acc, both in time step k.

D Lateral alignment

This node evaluates information of the vehicles' alignment in the lane to estimate the probabilities of a straight, left and right manoeuvre. If the vehicle is located more on the left side of the lane, an indication of a probable turning manoeuvre to the left can be given. Likewise, if the vehicle is located more on the right side, a probable turning manoeuvre to the right can be indicated. If the vehicle instead aligns itself with the centerline, this provides information of a probable straight manoeuvre. The node's resulting probabilities will never be set high for any of the manoeuvres, since this evidence is considered to be uncertain. For instance, drivers handle their
For instance, drivers handle their 19 3. Long-term path prediction Figure 3.7: Collected real data used for creation of the acceleration profile for the accel- eration evidence in the Bayesian Network. The profile levels can be seen as the black lines and are placed on -0.3, -0.5 and -0.7 [m/s2]. Blue colour represents the driving behaviour when turning left, green when going straight and red representing right turn. vehicles differently, because of for example how drivers have been taught to drive or depending on vehicle properties. Some drivers align their vehicles to make a big turn hence acts in the opposite way of what was described previously. The node can therefore be used to detect if a turn will be made, but not in which direction it will be. The node will thus monitor how much the vehicle addresses either side of the road, by looking at the ratio between the distances from the vehicle to each side of the lane. The data used to create the profile, for the ability of defining the probabilities of each manoeuvre, can be seen in Figure 3.8. The different test scenarios, used when collecting the data, can be compared and seen in Figure 3.5 and 3.6. The result from the collected data shows that if the vehicle is detected to lean more than 50 [cm] towards either side of the own lane, a high probability will be given to the two turning manoeuvres. Otherwise a higher probability will be given to the straight manoeuvre. These probabilities are described by the function flateralAlign, which is used to compute the likelihood of the lateral alignment LA, given the manoeuvre M : P (LA|M) = flateralAlign(LeftDelim(k), RightDelim(k)) (3.8) where LeftDelim and RightDelim are the distances to the lane markers on the left and the right side of the vehicle respectively, in time step k. E Turn signal The node regarding the turn signal is observing if, and in which direction, the turn signal is activated. If a turn signal is activated the probability of a turn is set high. 
Figure 3.8: Profile for the lateral alignment node in the Bayesian Network.

If a left turn signal is activated, it is highly probable that a turn to the left will be made, and vice versa for a right turn signal. If no turn signal is activated, the probability of a straight manoeuvre is higher, but not as high as for left/right when a turn signal is on, since the driver could have forgotten to activate the turn signal. Likewise for the left/right turn, the indicator could have been activated by mistake or set in the opposite direction from the intended one, hence it cannot be fully relied upon. Either way, as most countries have laws regulating the use of turn signals (in Sweden regulated in "Trafikförordning (1998:1276)"), this is considered strong evidence. The likelihood of the Turn signal, TS, given the manoeuvre M, can be calculated as:

P(TS | M) = f_turnSignal(turnsignal(k))    (3.9)

where f_turnSignal refers to the look-up table used for the input turnsignal(k), set to the activated turn signal (left, right, none), in time step k.

F Angular velocity

Since the Lateral alignment node is uncertain about the alignment, the Angular velocity node observes the turn rate as a complement, to give a more certain indication of which direction the vehicle will travel. By analysing the collected data, seen in Figure 3.9, two thresholds were found and set to ±10 [deg/s]. If the observed data is above the threshold of 10 [deg/s], the probability of a right turn is set high. Likewise, if the data is below -10 [deg/s], the probability of a left turn is set high. Otherwise a high probability is set for the straight path. The likelihood of the Angular velocity AV given the manoeuvre M is given by:

P(AV | M) = f_yaw(yawRate(k))    (3.10)

where f_yaw is a look-up table with the angular velocity, yawRate, in time step k as input.

Figure 3.9: Profile for the angular velocity node in the Bayesian Network.
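The thresholding described above can be sketched as a simple look-up function. The probability values below are illustrative assumptions, not the thesis's calibrated table; only the ±10 [deg/s] thresholds come from the text.

```python
# Sketch of the angular-velocity likelihood look-up f_yaw (Eq. 3.10).
# The probability triples are assumed illustrative values; the +-10 deg/s
# thresholds are the ones found from the collected data.
def f_yaw(yaw_rate_deg_s):
    """Return likelihoods (P(AV|L), P(AV|R), P(AV|S)) for a yaw-rate reading."""
    if yaw_rate_deg_s < -10.0:        # below -10 deg/s: probable left turn
        return (0.8, 0.1, 0.1)
    if yaw_rate_deg_s > 10.0:         # above +10 deg/s: probable right turn
        return (0.1, 0.8, 0.1)
    return (0.15, 0.15, 0.7)          # otherwise: probable straight path
```

The other look-up nodes (acceleration change, lateral alignment, turn signal) follow the same pattern, each mapping an observation to a likelihood triple over the three manoeuvres.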
G Probable manoeuvre

The Probable manoeuvre node gives the final probability for each manoeuvre based on the probabilities given by the parent nodes described previously. The resulting probability distribution can be described as:

P(M | e_1, ..., e_6) = c_norm · P(M) · ∏_{i=1}^{6} P(e_i | M)    (3.11)

similar to (3.3), where c_norm is a normalisation constant given as c_norm = 1/P(e_1, ..., e_6) and calculated according to (3.4) as:

1/P(e_1, ..., e_6) = Σ_{j=1}^{3} P(M_j) ∏_{i=1}^{6} P(e_i | M_j)    (3.12)

Here, M represents the probability distribution over the driving manoeuvres [L, R, S], where L is Left, R is Right and S is Straight. The different observations e_i are given by e = {RR, AC, RM, AV, LA, TS}, which correspond to the parent nodes RR (Road Restrictions), AC (Acceleration Change), RM (Road Markings/Traffic rules), AV (Angular velocity), LA (Lateral alignment) and TS (Turn signal). The probabilities are thereafter calculated separately as:

P_Left = P(L | RR, AC, RM, AV, LA, TS)    (3.13)
P_Right = P(R | RR, AC, RM, AV, LA, TS)    (3.14)
P_Straight = P(S | RR, AC, RM, AV, LA, TS)    (3.15)

where P_Left, P_Right and P_Straight are the probabilities for a left, right and straight manoeuvre respectively. Observe that the prior probability P(M), stated in (3.11), is set equal for all three directions, i.e. 1/3 for each of them. Recall that the likelihoods P(e_i | M) were described in (3.5) to (3.10). The result of the Probable manoeuvre node will act as complementary weights, via a Transition matrix, for the filtering part of the Long-term path prediction subsystem. This matrix is used in the Interacting Multiple Model (IMM) filter to provide the probabilities of transitioning from one model to another.
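Equations (3.11) to (3.16) amount to a normalised product of the six node likelihoods under a uniform prior, replicated into the rows of the Transition matrix. A minimal sketch, with illustrative likelihood values in place of the thesis's calibrated look-up tables:

```python
import numpy as np

# Sketch of the Probable manoeuvre node and Transition matrix
# (Eqs. 3.11-3.16). The six likelihood rows are assumed illustrative
# values, not the thesis's calibrated tables.
def manoeuvre_posterior(likelihoods, prior=None):
    """likelihoods: (6, 3) array; row i = [P(e_i|L), P(e_i|R), P(e_i|S)]."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    if prior is None:
        prior = np.full(3, 1.0 / 3.0)              # uniform prior P(M) = 1/3
    unnorm = prior * np.prod(likelihoods, axis=0)  # (3.11) before scaling
    return unnorm / unnorm.sum()                   # c_norm via (3.12)

# Evidence mildly favouring a left turn (rows: RR, AC, RM, AV, LA, TS)
evidence = [
    [0.5, 0.3, 0.2],
    [0.6, 0.2, 0.2],
    [0.4, 0.3, 0.3],
    [0.7, 0.1, 0.2],
    [0.5, 0.2, 0.3],
    [0.8, 0.1, 0.1],
]
p_left, p_right, p_straight = manoeuvre_posterior(evidence)
Pi = np.tile([p_left, p_right, p_straight], (3, 1))  # (3.16): identical rows
```

Because every row of Π is the same posterior triple, the current model does not change the transition probabilities; only the BN evidence does.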
The Transition matrix is therefore formed, based on (3.13), (3.14) and (3.15), as:

    Π = [ Π_{Left|Left}      Π_{Right|Left}      Π_{Straight|Left}
          Π_{Left|Right}     Π_{Right|Right}     Π_{Straight|Right}
          Π_{Left|Straight}  Π_{Right|Straight}  Π_{Straight|Straight} ]

      = [ P_Left  P_Right  P_Straight
          P_Left  P_Right  P_Straight
          P_Left  P_Right  P_Straight ]    (3.16)

where each row corresponds to the currently used model (Left, Right, Straight) and gives the probability of either staying in that model or transitioning to another model in the next time step. For example, if the current model is Left, the matrix gives the probability of either transitioning to the Straight or Right model or of staying in the Left model.

3.2.3 Proposed Bayesian Network algorithm

The input to the BN is the acquired digital map data as well as the state information of the EGO and a possible POV, collected via sensor data. The output from the subsystem is the transition matrix, Π, giving the probabilities associated with the left, right and straight driving directions. The BNs for the EGO and the POV are executed in parallel, resulting in one transition matrix for the EGO and one for the POV.

Algorithm 1 Bayesian Network
Input: Map data, Sensor data
Output: Π
  Each of the functions is called with its associated observations as input
  Collect P(RR | M) according to (3.5)
  Collect P(RM | M) according to (3.6)
  Collect P(AC | M) according to (3.7)
  Collect P(LA | M) according to (3.8)
  Collect P(TS | M) according to (3.9)
  Collect P(AV | M) according to (3.10)
  Compute P_Left according to (3.13) with use of (3.11)
  Compute P_Right according to (3.14) with use of (3.11)
  Compute P_Straight according to (3.15) with use of (3.11)
  return Π    ▷ Formed using Equation (3.16)

3.3 Interacting Multiple Model (IMM) filter

As Chapters 1.2 and 2.1 mentioned, one of the stated problems is how to model the EGO and POV as accurately as possible.
As previously explained, tracking driver intentions is complicated by the fact that vehicles do not exhibit one type of motion, but rather tend to transition between a set of typical motions. A way of including several motion models simultaneously, and switching to the model that best describes the intended driver behaviour, is to use an Interacting Multiple Model (IMM) filter. The purpose of using an IMM is the possibility to include two or more filters, each associated with a different motion model for target tracking. Filtering can be done with any desired filter, such as an Extended Kalman Filter (EKF) or an Unscented Kalman Filter (UKF). The IMM moreover forms a weighted sum of each filter's output, hence being able to rapidly adjust to the observed vehicle's probable manoeuvre. Computational complexity can become a problem, since there will be as many filters running in parallel as there are incorporated motion models. The EKF is therefore chosen in the proposed design, as it is an established and computationally light filtering technique.

In an IMM approach, the system state is described by a certain motion model, denoted mo. The estimated true model is thereafter determined from a finite and predefined set of alternative models. If a change of manoeuvre has been made by the observed driver, a transition to another model that better describes the new vehicle dynamics is required. The motion model that best matches the observed behaviour is therefore allowed to change between consecutive time steps k. To be able to accurately predict which motion model best represents the probable next manoeuvre, each model and its associated filter need to be continuously updated. The state and measurement estimates are therefore computed for each of the N models.
The mathematical formulations in this section use notation according to [29], with the state x(k) and measurement ẑ(k) representations given as:

x(k) = f_mo(k-1, x(k-1), q(k-1))
ẑ(k) = h_mo(k-1, x(k), ε(k))    (3.17)

where f_mo and h_mo depend on the corresponding model. Here, q(k-1) and ε(k) represent the process and measurement noise respectively. The reason why the state and measurement representations should be on the same form is the IMM algorithm's requirement of being computed the same way irrespective of motion model. These requirements are reflected in the state space representation of (3.17) as:

x(k) = A_mo x(k-1) + B_mo q(k-1)
ẑ(k) = C_mo x(k) + D_mo ε(k)    (3.18)

where A_mo, B_mo, C_mo and D_mo are matrices defined in accordance with the associated motion model, as given in Section 2.3. These matrices are restricted by the requirement that they need to be of the same size irrespective of which model is in use.

In summary, the main purpose of the IMM approach is to estimate the true model from a finite and predefined set of motion models. This true model is assumed to change in time, hence both previous state information and future probability need to be considered in the evaluation. A cycle of the IMM algorithm is computed in four steps, illustrated in Figure 3.10, which comprise:

1. Calculation of Mixing probability update and Mixing, which integrates the previous state with the probable future behaviour given by both the Bayesian Network (BN) and the result from the previous iteration time step, hence mixing the provided information. This is described in Section 3.3.1.
2. Filtering, delivering the current state for each motion model, further described in Section 3.3.2.
3. Calculation of Model probability update, which delivers the updated belief of the probable manoeuvre. This is described more thoroughly in Section 3.3.3.
4.
Output estimate calculation, giving the weighted output of the different models, with the probability of the next manoeuvre incorporated into the resulting state and covariance. This final step is given in Section 3.3.4.

Section 3.3.5 describes the intended application of the IMM, with the associated pseudo-code given in Section 3.3.6.

Figure 3.10: Overview of a cycle in the IMM-filter, comprised of four computation steps.

3.3.1 Mixing probability update and mixing

The mixing step of the IMM-filter has the purpose of integrating/mixing knowledge of the state and covariance from the previous iteration with the probability of the future driver manoeuvre. The mixing probability update µ̂(k-1) is defined with regard to the probable future manoeuvre predicted by the Bayesian Network, through the Transition matrix Π(k) given in (3.16), and the model probability µ̄(k-1) given by the previous iteration. The output from the mixing step is an update of the previously estimated state and covariance, which works as input to the filtering step. The updated mixing probability for each model, µ̂_mo(k-1), can be calculated as:

µ̂_mo(k-1) = Π_mo · µ̄(k-1) / Σ_{r=1}^{N} Π_r · µ̄(k-1),  for mo = 1, ..., N    (3.19)

where Π_mo (and similarly Π_r in the summation) denotes each model's probability for left, right and straight in the matrix described in (3.16).
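The mixing probability update (3.19) can be sketched as follows, using the standard IMM mixing convention (sum over the source model index). The transition matrix and previous probabilities are illustrative, with the initial belief taken from (3.34):

```python
import numpy as np

# Sketch of the mixing probability update (Eq. 3.19); Pi and mu_prev
# are illustrative stand-ins for the BN output and previous belief.
def mixing_probabilities(Pi, mu_prev):
    """Pi: (N, N) transition matrix, mu_prev: (N,) model probabilities."""
    mixed = Pi.T @ mu_prev          # element mo: sum_r Pi[r, mo] * mu_prev[r]
    return mixed / mixed.sum()      # normalise as in the denominator of (3.19)

Pi = np.array([[0.2, 0.1, 0.7]] * 3)      # identical rows, as in Eq. (3.16)
mu_prev = np.array([0.05, 0.05, 0.9])     # initial belief from Eq. (3.34)
mu_mix = mixing_probabilities(Pi, mu_prev)
```

With identical rows in Π, the mixed probabilities simply reproduce the BN's manoeuvre posterior, which is the intended coupling between the two subsystems.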
The mixed initial state estimate for each model, denoted x̂_mo(k-1|k-1), can thereafter be updated with the combined knowledge of each model's previous state x̄_r(k-1|k-1) and mixing probability µ̂_mo(k-1) as:

x̂_mo(k-1|k-1) = Σ_{r=1}^{N} x̄_r(k-1|k-1) · µ̂_mo(k-1),  for mo = 1, ..., N    (3.20)

and correspondingly the initial estimated covariance for each model, P̂_mo(k-1|k-1), can be updated using each model's previous covariance P̄_mo(k-1|k-1) together with the difference between the previous and updated state estimates, as:

P̂_mo(k-1|k-1) = Σ_{r=1}^{N} µ̂_mo(k-1) · { P̄_mo(k-1|k-1) + [x̄_r(k-1|k-1) - x̂_mo(k-1|k-1)] · [x̄_r(k-1|k-1) - x̂_mo(k-1|k-1)]^T },  for mo = 1, ..., N    (3.21)

3.3.2 Filtering

The initial state estimate x̂(k-1|k-1) and corresponding initial covariance P̂(k-1|k-1), acquired from Section 3.3.1, are thereafter used for a prediction through the predefined filters, described in Subsection A, for each model. The state is thereafter compared with an observed measurement z(k) in Subsection B, in order to make an accurate prediction and update. The result is an update of the initial state estimate, denoted x̂_mo(k|k), and covariance, P̂_mo(k|k), which work as inputs to both the Model probability update and the Output estimate calculation, and are also used in the next iteration step of the IMM-filter.

A Prediction update

The predicted state x̂_mo(k|k-1) and covariance P̂_mo(k|k-1) are calculated for each model according to:

x̂_mo(k|k-1) = A_mo · x̂_mo(k-1|k-1)    (3.22)
P̂_mo(k|k-1) = A_mo · P̂_mo(k-1|k-1) · A_mo^T + B_mo · Q_mo · B_mo^T    (3.23)

where Q_mo is the process noise covariance matrix, and A_mo together with B_mo are the matrices acquired from Section 2.3 for each motion model.

B Measurement update

The state x̂_mo(k|k) and covariance P̂_mo(k|k) are computed using the observed measurement z(k), the predicted state x̂_mo(k|k-1) and the covariance P̂_mo(k|k-1).
This is calculated as:

ẑ_mo(k) = C_mo · x̂_mo(k|k-1)    (3.24)
S_mo(k) = C_mo · P̂_mo(k|k-1) · C_mo^T + D_mo · R · D_mo^T    (3.25)
K_mo(k) = P̂_mo(k|k-1) · C_mo^T · S_mo(k)^{-1}    (3.26)
x̂_mo(k|k) = x̂_mo(k|k-1) + K_mo(k) · (z(k) - ẑ_mo(k))    (3.27)
P̂_mo(k|k) = P̂_mo(k|k-1) - K_mo(k) · S_mo(k) · K_mo(k)^T    (3.28)

where ẑ_mo(k) denotes the predicted measurement, S_mo(k) the innovation covariance matrix and K_mo(k) the Kalman gain for each motion model. C_mo and D_mo are matrices defined according to Section 2.3. Notice that the measurement noise covariance matrix R is defined irrespective of which motion model is in use.

3.3.3 Model probability update

After the filtering has been conducted, the probability of using each motion model needs to be updated in accordance with the newly estimated covariance P̂_mo(k|k) and innovation covariance matrix S_mo(k). The updated model probability µ̂_mo(k), also referred to as the weights, is thereafter passed to the Output estimate calculation step and is also used in the next iteration step of the IMM-filter.

With the state and covariance, as well as the observed measurement z(k) and predicted measurement ẑ_mo(k), the likelihood of the probable future motion model can be expressed as:

Λ_mo(k) = 1/√|2π S_mo(k)| · exp( -0.5 · (z(k) - ẑ_mo(k))^T · S_mo(k)^{-1} · (z(k) - ẑ_mo(k)) )    (3.29)

The model probabilities µ̂_mo(k) are thereafter updated, to be used in the next iteration, using the likelihood Λ_mo(k), according to:

µ̂_mo(k) = Λ_mo(k) · c̄_mo / Σ_{r=1}^{N} Λ_r(k) · c̄_r,  for mo = 1, ..., N    (3.30)

where the normalisation constant c̄_mo is calculated, in accordance with (3.19), as:

c̄_mo = Σ_{r=1}^{N} Π_r · µ̂_mo(k-1),  for mo = 1, ..., N    (3.31)

3.3.4 Output estimate calculation

The resulting output estimated state x̌(k|k) and covariance P̌(k|k) are finally computed as a weighted sum, which incorporates the updated model probability µ̂_mo(k) as well as the state x̂_mo(k|k) and covariance P̂_mo(k|k) for each of the corresponding models.
This is given as:

x̌(k|k) = Σ_{r=1}^{N} x̂_r(k|k) · µ̂_r(k)    (3.32)
P̌(k|k) = Σ_{r=1}^{N} µ̂_r(k) · ( P̂_r(k|k) + (x̂_r(k|k) - x̌(k|k)) · (x̂_r(k|k) - x̌(k|k))^T )    (3.33)

The variables used for the next iteration are finally assigned as:

x̄_mo(k) = x̌(k),  P̄_mo(k) = P̌(k),  µ̄(k) = µ̂(k)

3.3.5 Application

As presented in Section 2.3, three different motion models are incorporated in the proposed solution, describing the left, right and straight manoeuvres. The estimated state and covariance are thus updated at every time step for each motion model separately using the IMM-filter. These estimates are thereafter combined into a weighted sum, used in the output state and covariance computation.

Instead of relying only on the manoeuvre probability computed by the Bayesian Network (via the Transition matrix), the manoeuvre probability calculated by the IMM at the current time step is also accounted for. This choice is motivated by the fact that the two subsystems give results with respect to different time aspects: the IMM delivers near-term predictions, while the BN makes long-term predictions. The intended application therefore uses a combined manoeuvre probability when deciding which motion model best represents the future manoeuvre.

Since the mixing step considers the IMM's manoeuvre probability at the previous iteration step, this needs to be initially defined as:

µ̄(0) = [0.05  0.05  0.9]    (3.34)

where µ̄(0) contains the probabilities for the left, right and straight models. The straight-path manoeuvre is initially given the highest likelihood (90% certainty), since analysis of the data suggests it is reasonable to assume that a vehicle's initial pathway has a straight direction.

3.3.6 Proposed IMM algorithm

The proposed IMM-algorithm continuously computes the filtered state and covariance for the EGO as long as a detected movement has been monitored.
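The model probability update and output estimate steps of Sections 3.3.3 and 3.3.4 (Eqs. 3.29 to 3.33) can be sketched as follows; the innovations, covariances and transition matrix are illustrative stand-ins, not values from the thesis:

```python
import numpy as np

# Sketch of the model probability update and output estimate
# (Eqs. 3.29-3.33) for N = 3 models, with illustrative inputs.
def gaussian_likelihood(nu, S):
    """Eq. (3.29): Gaussian density of innovation nu with covariance S."""
    norm = np.sqrt(np.linalg.det(2 * np.pi * S))
    return np.exp(-0.5 * nu @ np.linalg.solve(S, nu)) / norm

def model_probability_update(likelihoods, Pi, mu_prev):
    """Eqs. (3.30)-(3.31): reweight each model by its measurement likelihood."""
    c_bar = Pi.T @ mu_prev                      # predicted model probabilities
    mu = likelihoods * c_bar
    return mu / mu.sum()

def output_estimate(states, covs, mu):
    """Eqs. (3.32)-(3.33): probability-weighted mean and covariance."""
    x_out = mu @ states
    P_out = sum(m * (P + np.outer(x - x_out, x - x_out))
                for x, P, m in zip(states, covs, mu))
    return x_out, P_out

Pi = np.array([[0.2, 0.1, 0.7]] * 3)            # as in Eq. (3.16)
mu_prev = np.array([0.05, 0.05, 0.9])
innovations = [np.array([0.1]), np.array([2.0]), np.array([0.1])]
S = np.eye(1) * 0.5
lik = np.array([gaussian_likelihood(nu, S) for nu in innovations])
mu = model_probability_update(lik, Pi, mu_prev)  # model 2's weight collapses
states = np.array([[0.0], [1.0], [0.2]])
covs = np.array([np.eye(1)] * 3)
x_out, P_out = output_estimate(states, covs, mu)
```

Note how the model with the large innovation (2.0) receives a near-zero weight, so the fused output is dominated by the models that explain the measurement well.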
If a POV is detected, simultaneous filtering computations are conducted using the same IMM-algorithm. Note that the initial covariance matrix P(1) is a predefined matrix with values given in Appendix A.4. The filtered estimates of each accounted vehicle are thereafter sent to the final part of the Long-term path prediction block, the prediction part using the UKF.

Algorithm 2 Interacting Multiple Model filter
Input: Π(k), Sensor data z(k), Model specifications f_mo, Q_mo
Output: x̌(k), P̌(k)
Initialise: x̄_mo(0) = z(1), P̄_mo(0) = P(1)
  for mo = 1 : N do
    Perform integration of previous estimations using (3.19) to (3.21):
      [x̂_mo(k-1), P̂_mo(k-1)] = Mixing(x̄_mo(k-1), P̄_mo(k-1), Π(k), µ̄(k-1))
    Perform filtering using the defined model f_mo according to (3.22) to (3.28):
      [x̂_mo(k), P̂_mo(k)] = Filtering(x̂_mo(k-1), P̂_mo(k-1), z(k), f_mo, Q_mo, R)
  end for
  Update the model probability using (3.29) to (3.31):
    [µ̂(k)] = ModelProbabilityUpdate(P̂(k), z(k), µ̂(k-1))
  Compute the output estimates using (3.32) and (3.33):
    [x̌(k), P̌(k)] = OutputEstimate(x̂_mo(k), P̂_mo(k), µ̂(k))
  Assign the next iteration variables:
    x̄_mo(k) = x̌(k), P̄_mo(k) = P̌(k), µ̄(k) = µ̂(k)
  return x̌(k), P̌(k)

3.4 Trajectory prediction using UKF

The Unscented Kalman Filter (UKF) algorithm is comprised of two parts. The first part computes a prediction of the states, with corresponding covariances, for the chosen motion model; in the second part, these predictions are updated through comparison with received measurements. Thus, if the aim is to predict a future trajectory, only the first part of the UKF is used. The main advantage of a UKF is that the filtering technique can be applied irrespective of whether the system functions are linear. Another reason for using a UKF for trajectory prediction is the possibility of using Gaussian distributions to represent driver uncertainty.
The UKF uses a sampling technique called the Unscented Transform, which selects a minimal set of sample points, denoted sigma points, around the state mean. The sigma points χ are thereafter propagated through the linear or nonlinear function to accurately capture the mean and covariance of the estimates at each time step. The result is a filter that subsequently can be used to calculate a new estimated mean and covariance.

For a state model with added noise, as described in (3.17), the prediction part of the UKF forms a Gaussian approximation (with the underlying equations from [30]) according to steps 1 and 2 below. This is computed for the whole prediction horizon i = 1, ..., n_predictions.

1. A set of 2n+1 sigma points is derived for the augmented state mean x̃ and covariance P̃, with n being the state dimension, as:

χ_0 = x̃(i)
χ_c = x̃(i) + √(n+λ) · (√P̃(i))_c
χ_{c+n} = x̃(i) - √(n+λ) · (√P̃(i))_c,  for c = 1, ..., n    (3.35)

where c denotes the column of the matrix square root. The scaling parameter λ is given as:

λ = α²(n+κ) - n    (3.36)

Here x̃ and P̃ are given by the previous iteration's predicted state and covariance, which at the first prediction step are initiated with the IMM-filter output (x̌ and P̌). The constants α and κ determine the spread of the sigma points around the mean (i.e. how large the uncertainty is).

2. Compute the predicted moments by propagating the sigma points through the function:

x̃(i+1) = Σ_{s=0}^{2n} f̄_mo(χ_s) · W_s^x̃    (3.37)
P̃(i+1) = Σ_{s=0}^{2n} (f̄_mo(χ_s) - x̃(i)) · (f̄_mo(χ_s) - x̃(i))^T · W_s^P̃ + Q̃_mo(i)    (3.38)

where f̄_mo is the function and Q̃_mo the process noise covariance matrix for the chosen motion model described in Section 2.3. The weights W are calculated as:

W_0^x̃ = λ / (n+λ)    (3.39)
W_0^P̃ = λ / (n+λ) + (1 - α² + β)    (3.40)
W_Γ^x̃ = W_Γ^P̃ = 1 / (2(n+λ)),  for Γ = 1, ..., 2n    (3.41)

with β as a parameter used to incorporate prior information about the input state's distribution.
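Steps 1 and 2 can be sketched as a single unscented prediction step. The constant-velocity model, tuning constants and noise below are illustrative, and the covariance deviations are taken around the propagated mean, as in the standard unscented transform:

```python
import numpy as np

# Sketch of the UKF prediction step (Eqs. 3.35-3.41) for one horizon step,
# using an illustrative 2-state constant-velocity model, not the thesis's
# motion models.
def ukf_predict(x, P, f, Q, alpha=1e-1, beta=2.0, kappa=0.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n                     # (3.36)
    sqrtP = np.linalg.cholesky((n + lam) * P)            # sqrt((n+lam) P)
    sigma = np.column_stack([x, x[:, None] + sqrtP,
                             x[:, None] - sqrtP])        # (3.35)
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))       # (3.41)
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)                              # (3.39)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)      # (3.40)
    prop = np.apply_along_axis(f, 0, sigma)              # propagate sigma points
    x_pred = prop @ Wm                                   # (3.37)
    dev = prop - x_pred[:, None]
    P_pred = dev @ np.diag(Wc) @ dev.T + Q               # (3.38)
    return x_pred, P_pred

dt = 0.1
f = lambda s: np.array([s[0] + dt * s[1], s[1]])         # constant velocity
x_pred, P_pred = ukf_predict(np.array([0.0, 10.0]), np.eye(2), f,
                             Q=np.eye(2) * 1e-3)
```

For this linear model the unscented prediction reproduces the exact Kalman prediction, which is a useful sanity check before plugging in a nonlinear turning model.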
3.4.1 Application

The proposed long-term prediction design follows the conventional filtering algorithm of the UKF, resulting in an estimated state and covariance at each propagated time step. The propagation is made along a predefined prediction horizon, using the chosen model that describes the most probable vehicle trajectory.

An optional part is added to the subsystem to monitor the distance to the intersection, if this is given as input data. The long-term path predictions can then benefit from the result given by the Bayesian Network. For instance, they can rely on the BN's choice of motion model for the predictions once the vehicle is in the intersection, for example producing better predictions of a turn even before the vehicle enters the curve. If the distance to the intersection cannot be acquired, the chosen prediction model is solely the result from the IMM.

3.4.2 Proposed trajectory prediction algorithm

The proposed trajectory prediction algorithm follows the mathematical formulations stated in (3.35) to (3.41). The resulting output from the prediction algorithm is a state vector and covariance matrix describing the whole prediction horizon for each considered vehicle. The calculated state and covariance are subject to the motion model chosen for the predictions and are denoted x̃_EGO and P̃_EGO for the EGO, and x̃_POV and P̃_POV for the POV.
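The structure of the horizon loop, repeatedly propagating the mean and covariance one step at a time, can be sketched with an illustrative linear constant-velocity model standing in for the chosen motion model:

```python
import numpy as np

# Sketch of the prediction-horizon loop: each step propagates mean and
# covariance. The linear model A and noise Q are illustrative stand-ins
# for the thesis's motion models and sigma-point propagation.
def predict_horizon(x0, P0, A, Q, n_predictions):
    xs, Ps = [x0], [P0]
    for _ in range(n_predictions):
        xs.append(A @ xs[-1])                    # propagate the mean
        Ps.append(A @ Ps[-1] @ A.T + Q)          # propagate the covariance
    return np.array(xs), np.array(Ps)

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])            # position-velocity model
xs, Ps = predict_horizon(np.array([0.0, 10.0]), np.eye(2),
                         A, np.eye(2) * 1e-3, n_predictions=20)
```

The growing diagonal of the covariance along the horizon is exactly the driver uncertainty that the collision detection block consumes in the next chapter.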
Algorithm 3 Trajectory prediction using UKF
Input: x̌(k), P̌(k), f̄_mo
Output: x̃, P̃
Initialise: x̃(0) = x̌(k), P̃(0) = P̌(k)
  for i = 1 : n_predictions do
    Derive a set of 2n+1 sigma points by the use of (3.35) and (3.36):
      χ = FormSigmaPoints(x̃(i-1), P̃(i-1), n, α, κ)
    Compute state and covariance predictions using (3.37) to (3.41):
      [x̃(i), P̃(i)] = ComputePredictions(χ, f̄_mo, x̃(i-1), P̃(i-1), Q̃_mo)
  end for
  return x̃ = [x̃(0), ..., x̃(n_predictions)], P̃ = [P̃(0), ..., P̃(n_predictions)]

3.5 Long-Term Path Prediction Procedure

As the introduction to this chapter described, the Long-term path prediction subsystem comprises the three parts Bayesian Network (BN), Interacting Multiple Model (IMM) filter and Unscented Kalman Filter (UKF). These have been defined in Sections 3.2, 3.3 and 3.4 respectively, and are in this last section tied together to describe the overall procedure of the first subsystem, as illustrated in Figure 3.11.

Figure 3.11: Overview of the Long-term path prediction subsystem's flow of input and output signals between the different parts.

As illustrated in Figure 3.11, the BN sends the Transition matrix Π to the IMM. Both the BN and the IMM thereafter output the motion model with the highest probability, denoted f_mo,BN and f_mo,IMM respectively. The IMM's choice of motion model is, however, partially affected by the input Transition matrix. Note that both the BN and the IMM have the measurement data as input.

Both the BN's and the IMM's outputs are thereafter sent to an evaluation algorithm, which determines which motion model is best suited for the prediction horizon. If the distance to the intersection is known, the motion model used in the UKF part will vary along the prediction horizon.
Namely, the model given by the IMM is used for the predictions up to the intersection, and the BN's chosen model is used for the prediction horizon in and through the intersection. However, if the distance to the intersection is unobtainable, only the IMM's computed best-fit motion model is used. In the presented solution, this evaluation algorithm is integrated into the UKF algorithm, but for the sake of clarity it is described as a stand-alone algorithm.

As the last part of the subsystem, the UKF uses the input motion model properties to propagate the state and covariance along the predefined prediction horizon. The state and covariance of both observed vehicles are thereafter sent to the Collision detection subsystem for evaluation of a probable collision.

3.5.1 Long-term Path Prediction Algorithm

The algorithm described in this section is comprised of the algorithms stated in Sections 3.2.3, 3.3.6 and 3.4.2. As mentioned before, the evaluation algorithm has been integrated into the UKF, and is hence not a separate procedure as illustrated in Figure 3.11.

Algorithm 4 Long-term path prediction algorithm
Input: Map data, Sensor data
Output: x̃, P̃
  Make a long-term prediction using the BN:
    [Π(k), f_mo,BN(k)] = BN(Map data, Sensor data)
  Make a near-term prediction using the IMM:
    [x̌(k), P̌(k)] = IMM(Π(k), Sensor data z(k))
  Propagate the state and covariance using the chosen motion model along the predefined prediction horizon:
    [x̃, P̃] = UKF(f_mo,BN, x̌(k), P̌(k))
  return x̃ = [x̃(0), ..., x̃(n_predictions)], P̃ = [P̃(0), ..., P̃(n_predictions)]

4 Collision detection

This chapter describes how to detect a collision, based on the predictions provided by the Unscented Kalman Filter (UKF) from Chapter 3, as well as how to characterise a collision. Figure 4.1 provides an overview of the collision detection block as a part of the full system, marked in blue.
Furthermore, the connections with the other subsystems are also illustrated, primarily a direct link from the UKF and another link to the Collision Avoidance (CA) subsystem.

This chapter is organised as follows. First, an introduction to collision detection is given, with related work described in Section 4.1. Section 4.2 explains how a collision is considered and defined within the scope of this work. Sections 4.3 and 4.4 describe a deterministic and a probabilistic collision detection algorithm respectively. Finally, the connections and inter-dependencies between the collision detection subsystem and the overall system are outlined in Section 4.5.

Figure 4.1: Overview of the full system with the collision detection subsystem's position marked in blue.

4.1 Related Work

There are several different methods in the literature to define a collision. One method, presented in [31], is based on the intersection of two vehicles' respective areas. Firstly, the authors establish a pessimistic approximation of possible intersections between the ego vehicle (EGO) and the Perceived Other Vehicles (POVs), using circular areas to define the vehicles. If the areas intersect, a more accurate analysis is made using tighter and more representative areas of the vehicles, to evaluate if a collision indeed is imminent. This approach is used to reduce the computational burden as well as to analyse oncoming threats at an earlier stage. It is worth mentioning that this approach is fully deterministic.

In [17], collision detection is performed by defining appropriate collision and collision-free domains/areas. The collision areas are defined with respect to the center of both vehicles' front bumpers, whereas the collision-free area is defined according to the center of the POV's rear bumper and the center of the EGO's front.
These areas depend on the length and width of both vehicles, and are later used to evaluate the imminence of a collision. The evaluation is conducted by computing the probability density over the area of interest, using the joint cumulative distribution. Note that their collision detection algorithm can only be applied in a specific traffic situation, namely a left turn by the POV across the EGO's path.

Instead of defining the collision area around the EGO, some research has been conducted on defining one or more collision areas on a map [2][3][22]. This means that, by assuming that each vehicle follows a specific path in a lane, the area where a collision can occur is the intersection of two (or more) lanes. Within each specific traffic intersection, the collision area would therefore be defined depending on the possible intersecting paths of the vehicles.

The objective of this work is to provide a collision detection algorithm suitable for all traffic situations, even if no map/road topology information is available. Since the positions of the vehicles are estimates, the idea is to use a probabilistic approach for the evaluation of an imminent collision, where [17] has served as inspiration.

4.2 Collision Definition

There are a number of different ways to define a collision, as well as the area occupied by a vehicle. The chosen representation of a vehicle area is here based on sets. Simply speaking, when two vehicles are considered, the area of one of the vehicles is represented as a set bounded by the size of the vehicle, whereas the other vehicle is represented by a set bounded by the size of the vehicle but also rotated according to both vehicles' headings.

To explain further, the first step is to define the sets representing the areas of both vehicles. The sets of the EGO and the POV, A_EGO and A_POV respectively, are
bounded by their length and width relative to their center point as:

A_EGO = { (Ax_EGO, Ay_EGO) : -l/2 ≤ Ax_EGO ≤ l/2,  -w/2 ≤ Ay_EGO ≤ w/2 }    (4.1)
A_POV = { (Ax_POV, Ay_POV) : -l̄/2 ≤ Ax_POV ≤ l̄/2,  -w̄/2 ≤ Ay_POV ≤ w̄/2 }    (4.2)

where l is the length and w the width of the EGO, whereas l̄ and w̄ are the length and width of the POV, respectively. Furthermore, Ax_EGO and Ax_POV are the restrictions along the x-axis, whereas Ay_EGO and Ay_POV are the restrictions along the y-axis. An illustration is presented in Figure 4.2.

Figure 4.2: Illustration of the sets representing the vehicle areas. Here l is the length and w the width of the EGO, resulting in the EGO set A_EGO, whereas l̄ and w̄ are the length and width of the POV representing the set A_POV.

To be able to evaluate the collision risk, the next step is to define the vehicle areas as global sets of points. Since the measurements, and therefore also the predictions, of the positions are assumed to be the center point of the vehicle, the vehicle areas are defined around this point, e.g. (x_EGO(i), y_EGO(i)) is the center point of the EGO in prediction step i. An illustration of the global sets, defined around each vehicle's global cartesian position, can be seen in Figure 4.3. For each prediction step i = 1, 2, ..., n_predictions (where n_predictions is the number of predictions) the sets are then globally defined as:

A_EGO^global(i) = { (Ax_EGO(i), Ay_EGO(i)) : x_EGO(i) - l/2 ≤ Ax_EGO(i) ≤ x_EGO(i) + l/2,  y_EGO(i) - w/2 ≤ Ay_EGO(i) ≤ y_EGO(i) + w/2 }    (4.3)
A_POV^global(i) = { (Ax_POV(i), Ay_POV(i)) : x_POV(i) - l̄/2 ≤ Ax_POV(i) ≤ x_POV(i) + l̄/2,  y_POV(i) - w̄/2 ≤ Ay_POV(i) ≤ y_POV(i) + w̄/2 }    (4.4)

where x_EGO(i), y_EGO(i) and x_POV(i), y_POV(i) are the cartesian coordinates of the predicted position in prediction step i for the EGO and POV respectively. Furthermore, Ax_EGO(i) and Ax_POV(i) are the restrictions along the global x-axis, whereas Ay_EGO(i) and Ay_POV(i) are the restrictions along the y-axis.
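Since (4.3) and (4.4) describe axis-aligned boxes around the predicted center points (before any rotation by heading), they can be sketched as follows; the vehicle dimensions and positions are illustrative:

```python
# Sketch of the global vehicle-area sets (Eqs. 4.3-4.4): each set reduces
# to an axis-aligned box around the predicted center point. Dimensions and
# positions below are illustrative; heading is not yet considered.
def global_area(x_c, y_c, length, width):
    """Return (x_min, x_max, y_min, y_max) of the occupied area."""
    return (x_c - length / 2, x_c + length / 2,
            y_c - width / 2, y_c + width / 2)

def boxes_overlap(a, b):
    """True if two axis-aligned boxes intersect (ignoring heading)."""
    return a[0] <= b[1] and b[0] <= a[1] and a[2] <= b[3] and b[2] <= a[3]

ego = global_area(x_c=10.0, y_c=0.0, length=4.5, width=1.8)
pov = global_area(x_c=12.0, y_c=1.0, length=4.5, width=1.8)
```

The overlap test illustrates the deterministic, unrotated case only; the rotation by heading described next, and the probabilistic evaluation of Chapter 4, refine this picture.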
Figure 4.3: Illustration of the definition of the areas occupied by the vehicles, represented as sets. The sets, here for a specific i, are defined in global coordinates x_EGO, y_EGO and x_POV, y_POV, without any consideration of heading.

The next step is to rotate the global sets (4.3) and (4.4) according to heading, with the use of a rotation matrix. More specifically, the sets are first rotated according to the own vehicle's heading (θ_EGO or θ_POV) and secondly rotated according to the other vehicle's heading (θ_POV or θ_EGO). An illustration can be seen in Figure 4.4, where an example scenario is depicted in the leftmost figure. The next two figures show how the scenario is interpreted: the green colour depicts the vehicles in neutral position (zero heading) according to (4.3) and (4.4), and red demonstrates the sets rotated according to heading. In the middle figure the EGO is in neutral position, whereas in the rightmost the POV is. These last two figures show two distinct representations of the global situation, each centered on the perspective of one of the vehicles.

Figure 4.4: Illustration of the interpretation of a traffic scenario, where the scenario can be seen in the leftmost figure. The next two figures show how the scenario is interpreted: the green colour depicts the vehicles in neutral position (zero heading) according to (4.3) and (4.4), and red the global sets rotated according to heading. In the middle figure the EGO is in neutral position, whereas in the rightmost the POV is.

With the two representations of the vehicle areas exp