Reference Path Estimation for Lateral Vehicle Control
Master's thesis in Systems, Control and Mechatronics (Ex041/2015)
Arni Thorvaldsson, Vinzenz Bandi
Department of Signals and Systems, Chalmers University of Technology, SE-412 96 Gothenburg, Sweden, 2015. Telephone +46 31 772 1000.
© Arni Thorvaldsson & Vinzenz Bandi, 2015.
Supervisors: Erik Nordin & Mansour Keshavarz, Volvo Group Trucks Technology
Examiner: Lennart Svensson, Signals and Systems
Typeset in LaTeX. Gothenburg, Sweden 2015.

Abstract

Self-driving cars have been a hot topic in the media in recent years, with more and more tech companies and universities presenting projects with fully automated vehicles. Most of these vehicles rely on highly sophisticated and expensive sensors that are not yet feasible for commercial vehicles. On the other hand, the automated systems implemented in commercially available vehicles are largely limited to active safety scenarios, where the system only assists the driver in dangerous situations such as collision avoidance or lane support.

The goal of this thesis project is to use sensors on commercially available vehicles for automating lateral control in selected scenarios. The evaluated scenarios are limited to roads where the sensors can detect lane markings, a preceding vehicle, or both. The approach was to generate two different reference paths, one from lane markings and one from preceding vehicle information. The lane marking path is generated by filtering measurements from the lane detection camera with a nonlinear Kalman filter. The preceding vehicle path is generated by fusing the radar and camera measurements of the preceding vehicle. In order to develop and tune the filters, a detailed analysis was done on the sensor measurements collected for this project.

The implemented filters improve the current system in several ways. When the lane markings are lost for a short period of time, the prediction from the last measurement update can provide a reference for up to 40 meters of driving. The path generated from the estimates of the preceding vehicle describes the trajectory which that vehicle has driven. This way a more accurate reference signal can be generated than by using only the current position of the preceding vehicle, especially in turns and at long distances. With two reference paths the lateral control is more robust, and an algorithm that takes the estimation covariances of both paths into account guarantees a smooth transition between them.

Keywords: lateral control, autonomous vehicles, sensor fusion, nonlinear Kalman filtering.

Acknowledgements

We would like to thank Volvo Group Trucks Technology for giving us the opportunity to carry out this master's thesis, Lennart Svensson for being our examiner and giving us valuable input, and Erik Nordin and Mansour Keshavarz for being our supervisors at Volvo Group.
We would also like to thank our group within Vehicle Dynamics & Active Safety for the help and support throughout the project.

Arni Thorvaldsson, Vinzenz Bandi, Gothenburg, June 2015

Contents

1 Introduction
1.1 Background and motivation
1.2 Problem Statement
1.3 Applications
1.4 Aims and objectives
1.5 Thesis outline
2 Background
2.1 Sensor Fusion
2.1.1 Bayesian filter
2.1.2 Kalman Filter
2.1.3 Cubature Kalman Filter
2.1.4 Rauch-Tung-Striebel Smoother
2.2 Control Theory
2.2.1 Feedback
2.2.2 Feedforward
3 Sensors
3.1 Object Detection
3.1.1 Radar
3.1.2 Camera
3.2 Lane Detection Systems
3.3 GPS
3.3.1 RACELOGIC VBOX System
3.4 Sensor Setup
3.4.1 Object Detection
3.4.2 Lane Detection
3.4.3 Vehicle State Sensors
3.4.4 GPS
4 Methods
4.1 Generating a reference with lane markings
4.1.1 Generating a path from lane markings
4.1.2 Coefficient and Sampled Lane Marking Measurement and Filters
4.2 Coefficient Filter for Lane Marking Path
4.2.1 Prediction step
4.2.2 Update step
4.3 Sampled Filter for Lane Marking Path
4.3.1 Prediction step
4.3.2 Update step
4.4 Generating a reference with preceding vehicle
4.4.1 Generating reference path
4.5 Preceding Vehicle Filter
4.5.1 Prediction step
4.5.2 Update step
4.6 Control Signals
4.7 Experimental Data
4.7.1 Setup
4.7.2 Scenarios
4.8 Lane Marking Sensor Analysis
4.8.1 Ground Truth
4.8.2 Evaluating the Measurements
4.9 Preceding Vehicle Sensor Analysis
4.9.1 Generating ground truth using RTS smoother
5 Implementation
5.1 Simulink Implementation
6 Results
6.1 Lane Marking Error Statistics
6.1.1 Absolute Error
6.1.2 Error Distribution
6.2 Lane Marking Filter Comparison
6.3 Lane Marking Filter
6.3.1 Performance with original sensor data
6.3.2 Filter Performance With Added Noise
6.3.3 Filter Performance With Removed Measurements
6.4 Preceding vehicle filter analysis
6.5 Preceding vehicle path
6.6 Outlier rejection
7 Conclusion and Discussion
7.1 Implemented filters
7.2 Reference paths and error signals
7.3 Experimental implementation
8 Future Work
A Appendix 1
A.1 Maps
A.2 Simulink
1 Introduction

This chapter gives a brief overview of the thesis project and relevant topics. The problem statement, aims and objectives are also presented.

1.1 Background and motivation

The availability of driver assistance systems and autonomous functions has increased rapidly for production vehicles in recent years. Current systems and functions are being developed for active safety and aim to assist the driver in selected driving situations. Examples of such systems are lane keeping assistance, collision warning, emergency braking and adaptive cruise control. To further develop the technology and step closer towards fully autonomous vehicles, several research projects have been initiated. At Volvo Cars, "Drive Me" is an autonomous driving project in which 100 self-driving cars will be used on selected roads in and around Gothenburg. The first car is expected to be on the roads by 2017. Another project, Sartre [1], seeks to improve comfort, safety and energy consumption by developing platooning solutions. A platoon can be explained as a group of vehicles that drive in a coordinated formation. In Sartre the leading vehicle is manually driven while the following vehicles follow it fully autonomously. By using vehicle-to-vehicle (V2V) communication together with the vehicles' sensors, the gap between the vehicles can be kept small, which leads to reduced fuel consumption and less traffic congestion.

One of the main obstacles in recent projects, and in autonomous driving in general, is how to deal with inaccurate and unreliable sensors. Within the department of Advanced Technology and Research (Atr) at Volvo Group Trucks Technology (Gtt) there are several projects focused on self-driving trucks. In some of these projects the truck currently relies on unprocessed sensor measurements for lateral control, which can cause problems when the conditions for the sensors are not ideal. The simplest way to get a reference for lateral control is to use the sensor signal directly. This method is far from optimal, and there is considerable room for improvement in how the sensor data are handled. A simple filter that smooths the sensor measurements yields better results. An optimal way is to use filters based on Bayesian statistics. The most common technique to implement a Bayesian filter is the Kalman filter, invented in the 1950s by Rudolph Emil Kalman [2]. The Kalman filter uses a process model that predicts what the new measurement is expected to be. The estimate is then derived by using the prediction, the measurement and the uncertainty of both of them. When more than one sensor source is available, the optimal way is to derive the estimate from all of them. Also, when the sensor source is unavailable for a short period, the prediction can be used as a reference. How to handle sensor data in an optimal way using all available information was the motivation for this master's thesis.

1.2 Problem Statement

The performance of lateral control is directly connected to the characteristics and capability of the sensors used.
The sensors that are commercially available today are either not accurate enough or not reliable under all circumstances to be used individually. In order to have a robust lateral control, all available sensor sources need to be used optimally. In this project a radar sensor, a camera sensor and vehicle state sensors are fused together to generate a path for robust lateral control.

1.3 Applications

The sensors used in this project are limited to devices that exist in commercially available vehicles. These sensors are not capable of providing enough information for lateral control in complex urban scenarios. Therefore only selected scenarios are taken into account; most of them take place on highways or roads where few unexpected things can occur. The work done in this thesis provides a good basis for how to handle sensor data for lateral control in different applications. Some of the applications currently under development at Volvo Atr are described here, together with how the reference for lateral control can be obtained in each of them.

Autopilot
Autopilot is an autonomous driving system for highways. The driver is supposed to monitor the system and be prepared to take over control if the system has technical problems or dangerous situations appear in front of the vehicle. The lateral control can be done by using observations of lane markings, the position of a preceding vehicle, and/or Gps measurements together with a map.

Platooning
Platooning is when multiple vehicles drive in unison, both laterally and longitudinally. The leading vehicle is usually manually driven but can also be driven by an autopilot system. The following vehicles are driven autonomously with predefined longitudinal distances between each vehicle. Similar to autopilot, the lateral control can use the same references, but vehicle-to-vehicle communication can be added to ensure string stability within the platoon [3].

Queue Assist
In a traffic jam the queue assist system can take over longitudinal and lateral control of the vehicle to relieve the driver. The lateral control can be done using observations of lane markings and the position of the preceding vehicle.

1.4 Aims and objectives

The aim of this thesis is to improve the robustness of the vehicle lateral controller that is currently under development for several projects at Volvo Gtt. To fulfill the aim, the following scientific question needs to be answered:

"Can multiple sensor inputs improve robustness of lateral control for automated driving on highways?"

The objectives are to collect measurement data from available sensors that can be used for lateral control, analyze the data, and use the findings to develop a sensor fusion algorithm that makes optimal use of the sensors to generate a reference for lateral control. The algorithm will be tested in simulations and on a test track.

1.5 Thesis outline

The report is split into 8 chapters. Following the introduction, chapter 2 contains background theory and related work. In chapter 3 the production and prototype sensors considered in this thesis are listed; each sensor's function, main characteristics and outputs are described, as well as how it is used in this project. Chapter 4 describes the methods used for sensor analysis and the development of filters and algorithms. Chapter 5 contains a description of the algorithm implemented for testing on the test truck.
Chapter 6 presents relevant results from the data analysis and algorithm implementation. In chapter 7 the conclusions are presented and the main findings discussed. Finally, future work is proposed in chapter 8, with improvements to the current algorithm and an outlook on other methods to further improve lateral control.

2 Background

2.1 Sensor Fusion

Sensor fusion refers to methods that combine data from different sensors, often together with a process model. By adding more information about a quantity, measured by sensors or predicted by a process model, a more accurate estimate of the true quantity can be achieved than by relying on a single source of information.

Sensor fusion is often referred to in probabilistic robotics [4]. A probabilistic algorithm uses the dynamics of the robot and the measurements of its sensors. The dynamics can be modeled with a process model and a state transition distribution determined from the uncertainties in the state evolution. The sensors on the robot have a certain noise on their measurements, which can be quantified so that the measurement distribution is found. By combining the transition distribution and the measurement distribution in a statistical way, e.g. using a recursive Bayesian filter, the estimate of the actual state can be derived.

2.1.1 Bayesian filter

A Bayesian filter recursively calculates the posterior density of the state, which represents a complete statistical description of the state at each time instance [4]. The Bayesian filter relies on the Markov assumption, which specifies that if the current state $x_k$ is known, the past and future data are independent, i.e. the current state is a sufficient representation of the past states. The posterior density is calculated in two steps when a new measurement is available:

Time update
In the time update step the Bayesian filter calculates the predicted mean $\hat{x}_{k|k-1}$ and the predicted covariance $P_{k|k-1}$ using the process model and the control input. The process model is often called the prediction model. The prediction model is usually a set of mathematical equations for the state evolution, i.e. how the system states change from one time instance to the next. The uncertainties of the state evolution come from the fact that modeling the dynamics accurately is often difficult and not computationally efficient, and therefore approximations have to be made.

Measurement update
The measurement update calculates the mean $\hat{x}_{k|k}$ and covariance $P_{k|k}$ by using the prediction of the state, the measurements and their distributions. The sensors and the states are connected through the measurement model, which describes how the sensors observe the states.
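To make the two-step recursion concrete, the following is a minimal sketch of a recursive Bayesian filter on a one-dimensional grid. It is purely illustrative and not the thesis implementation: the state (a lateral offset), the motion model and all noise levels are assumed values.

```python
import numpy as np

# Minimal 1-D grid-based Bayesian filter: the posterior over a lateral
# position is a histogram, updated with the two steps described above.
x = np.linspace(-5.0, 5.0, 201)          # state grid [m] (assumed range)
posterior = np.ones_like(x) / x.size     # flat initial belief

def gaussian(mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2)

def time_update(belief, dx, process_std):
    # Shift the belief by the predicted motion dx, then convolve with
    # the process noise (the state transition distribution).
    shifted = np.interp(x - dx, x, belief, left=0.0, right=0.0)
    kernel = gaussian(0.0, process_std)
    predicted = np.convolve(shifted, kernel / kernel.sum(), mode="same")
    return predicted / predicted.sum()

def measurement_update(prior, z, meas_std):
    # Multiply the prior with the measurement likelihood and normalize.
    post = prior * gaussian(z, meas_std)
    return post / post.sum()

posterior = time_update(posterior, dx=0.2, process_std=0.3)
posterior = measurement_update(posterior, z=0.5, meas_std=0.4)
print("MAP estimate: %.2f m" % x[np.argmax(posterior)])
```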
2.1.2 Kalman Filter

The Kalman filter is the most common technique to implement a Bayesian filter. It was developed by Rudolph Emil Kalman [2] and published in 1960. The Kalman filter uses the moments representation of the Gaussian distribution. The estimate at sample $k$ is represented by the mean $\hat{x}_{k|k}$ and the covariance $P_{k|k}$; in probabilistic terms the model is $p(x_k|y_{1:k}) = \mathcal{N}(x_k; \hat{x}_{k|k}, P_{k|k})$. To make sure that the correct posterior is calculated and that it is Gaussian, the following properties have to hold:

- The initial belief at $x_0$ must be normally distributed.
- The Kalman filter assumes linear dynamics, thus the state transition probability $p(x_k|u_k, x_{k-1})$ must be a linear function with additive Gaussian noise:
$$x_k = A_k x_{k-1} + B_k u_k + q_k \qquad (2.1)$$
where $x_k$ is the state vector, $A_k$ is the state transition matrix, $B_k$ is the control input matrix and $q_k$ is additive Gaussian noise that represents the uncertainty in the state transition.
- The measurement probability $p(y_k|x_k)$ must be linear with additive Gaussian noise:
$$y_k = C_k x_k + r_k \qquad (2.2)$$
where $y_k$ is the measurement vector, $C_k$ is the measurement matrix and $r_k$ is additive Gaussian noise.

Systems that fulfill these properties are called linear Gaussian systems. The Kalman filter algorithm can be seen in equations (2.3) and (2.4). In equation (2.3) the state transition is implemented, where the predicted estimate $\hat{x}_{k|k-1}$ and the covariance $P_{k|k-1}$ are calculated before incorporating the measurements (a control input contributes an additional term $B_{k-1}u_{k-1}$ when present):

$$\text{Prediction:} \qquad \hat{x}_{k|k-1} = A_{k-1}\hat{x}_{k-1|k-1}, \qquad P_{k|k-1} = A_{k-1}P_{k-1|k-1}A_{k-1}^T + Q_{k-1} \qquad (2.3)$$

The update step is shown in equation (2.4), where $K_k$ is the Kalman gain, $v_k$ is the innovation, $S_k$ is the innovation covariance, $\hat{x}_{k|k}$ is the new estimate and $P_{k|k}$ is its covariance. The innovation measures how much new information is in the measurement, and the Kalman gain determines how much of the new measurement should be incorporated into the new state estimate. The moments $\hat{x}_{k|k}$ and $P_{k|k}$ are then updated.

$$\text{Update:} \qquad
\begin{aligned}
v_k &= y_k - H\hat{x}_{k|k-1} \\
S_k &= HP_{k|k-1}H^T + R_k \\
K_k &= P_{k|k-1}H^T S_k^{-1} \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k v_k \\
P_{k|k} &= P_{k|k-1} - K_k S_k K_k^T
\end{aligned} \qquad (2.4)$$

When considering linear Gaussian dynamic systems, the Kalman filter provides the optimal solution for estimating the states in the minimum mean squared error sense and is computationally efficient. But systems are often nonlinear and not Gaussian, and it is then not possible to obtain closed form solutions for the estimates. In those cases the linear Kalman filter is not applicable, and a nonlinear filter is needed that makes approximations in order to derive the estimates.
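Equations (2.3) and (2.4) translate almost line for line into code. The sketch below is a minimal transcription; the example constant-velocity model, the noise levels and the synthetic measurements are assumed for illustration only.

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Time update, equation (2.3)."""
    return A @ x, A @ P @ A.T + Q

def kf_update(x_pred, P_pred, y, H, R):
    """Measurement update, equation (2.4)."""
    v = y - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x_pred + K @ v, P_pred - K @ S @ K.T

# Example: 1-D constant-velocity model with the position measured.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
for y in [0.11, 0.25, 0.37]:             # synthetic measurements
    x, P = kf_predict(x, P, A, Q)
    x, P = kf_update(x, P, np.array([y]), H, R)
print(x)                                  # estimated position and velocity
```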
2.1.3 Cubature Kalman Filter

The cubature Kalman filter (Ckf) uses a spherical cubature rule for the approximation of a Gaussian filter [5] and can therefore be applied to nonlinear systems. The Ckf is derivative free, which means there is no need to find the Jacobian or Hessian, and it therefore avoids the divergence problems associated with them. In the Ckf prediction step (see equation (2.5)), $2n$ sigma points $\mathcal{X}$ are generated, where $n$ is the number of states. The sigma points are then propagated through the process model in order to calculate the moments, i.e. the predicted mean $\hat{x}_{k|k-1}$ and the predicted covariance $P_{k|k-1}$.

$$\text{Prediction:} \qquad
\begin{aligned}
\mathcal{X}^{(i)}_{k-1} &= \hat{x}_{k-1|k-1} + \sqrt{n}\,\big(P^{1/2}_{k-1|k-1}\big)_i, \quad i = 1, 2, \dots, n \\
\mathcal{X}^{(i+n)}_{k-1} &= \hat{x}_{k-1|k-1} - \sqrt{n}\,\big(P^{1/2}_{k-1|k-1}\big)_i, \quad i = 1, 2, \dots, n \\
\hat{x}_{k|k-1} &\approx \frac{1}{2n}\sum_{i=1}^{2n} f\big(\mathcal{X}^{(i)}_{k-1}\big) \\
P_{k|k-1} &\approx \frac{1}{2n}\sum_{i=1}^{2n} \big(f(\mathcal{X}^{(i)}_{k-1}) - \hat{x}_{k|k-1}\big)(\cdot)^T + Q_{k-1}
\end{aligned} \qquad (2.5)$$

In the update step (see equation (2.6)), new sigma points are generated using the predicted moments. The sigma points are propagated through the measurement model and the predicted measurement $\hat{y}_{k|k-1}$ is found. The innovation covariance matrix $S_k$ and the cross covariance matrix $P_{xy}$ are estimated. Finally the estimate of the state is updated in the mean $\hat{x}_{k|k}$ and the covariance $P_{k|k}$. Note that $P_{xy}S_k^{-1}$ is the Kalman gain.

$$\text{Update:} \qquad
\begin{aligned}
\mathcal{X}^{(i)}_{k} &= \hat{x}_{k|k-1} + \sqrt{n}\,\big(P^{1/2}_{k|k-1}\big)_i, \quad i = 1, 2, \dots, n \\
\mathcal{X}^{(i+n)}_{k} &= \hat{x}_{k|k-1} - \sqrt{n}\,\big(P^{1/2}_{k|k-1}\big)_i, \quad i = 1, 2, \dots, n \\
\hat{y}_{k|k-1} &\approx \frac{1}{2n}\sum_{i=1}^{2n} h\big(\mathcal{X}^{(i)}_{k}\big) \\
P_{xy} &\approx \frac{1}{2n}\sum_{i=1}^{2n} \big(\mathcal{X}^{(i)}_{k} - \hat{x}_{k|k-1}\big)\big(h(\mathcal{X}^{(i)}_{k}) - \hat{y}_{k|k-1}\big)^T \\
S_k &\approx \frac{1}{2n}\sum_{i=1}^{2n} \big(h(\mathcal{X}^{(i)}_{k}) - \hat{y}_{k|k-1}\big)(\cdot)^T + R_k \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + P_{xy}S_k^{-1}\big(y_k - \hat{y}_{k|k-1}\big) \\
P_{k|k} &= P_{k|k-1} - P_{xy}S_k^{-1}P_{xy}^T
\end{aligned} \qquad (2.6)$$

2.1.4 Rauch-Tung-Striebel Smoother

The Rauch-Tung-Striebel (Rts) smoother is a smoothing algorithm for linear Gaussian systems and was developed by Rauch, Tung and Striebel in the 1960s [6]. The Rts smoother is a forward-backward smoother: it filters forward using a Kalman filter (see section 2.1.2), computing $p(x_k|y_{1:k})$ for $k = 1, 2, 3, \dots, K$, and smooths backward, computing $p(x_k|y_{1:K})$ for $k = K-1, K-2, K-3, \dots, 1$. The predicted moments and the moments have to be saved for each time instance in the Kalman filtering step. The probabilistic model for Rts smoothing is $p(x_k|y_{1:K}) = \mathcal{N}(x_k; \hat{x}_{k|K}, P_{k|K})$ and the algorithm can be seen in equation (2.7):

$$\begin{aligned}
G_k &= P_{k|k}A_k^T P_{k+1|k}^{-1} \\
\hat{x}_{k|K} &= \hat{x}_{k|k} + G_k\big(\hat{x}_{k+1|K} - \hat{x}_{k+1|k}\big) \\
P_{k|K} &= P_{k|k} + G_k\big(P_{k+1|K} - P_{k+1|k}\big)G_k^T
\end{aligned} \qquad (2.7)$$

2.2 Control Theory

2.2.1 Feedback

When controlling the lateral position of a vehicle, the most frequently used approach is to control the lateral error $\Delta y$ at a look ahead distance $L$ to zero [7], [8]. One way of obtaining $\Delta y$ is to estimate the lateral deviation between the road and the center of gravity (CG) of the vehicle ($\Delta y_r$) and the yaw angle with respect to the road ($\varepsilon_r$), and extrapolate to $L$ ($\Delta y = \Delta y_r + L \cdot \varepsilon_r$) [9]. In [10], $\varepsilon$ is used instead of $\varepsilon_r$ to calculate $\Delta y$ when Dgps provides the lateral reference. The equation to find $\varepsilon$ becomes $\varepsilon = \varepsilon_r + \beta$, where $\beta$ is the slip angle of the vehicle, which means that $\varepsilon$ is the angle of the velocity vector with respect to the road. These methods do not take the properties of the path into account. If the path is available (e.g. when using a map or path planning), $\Delta y$ can instead be defined as the lateral deviation from the path at the look ahead distance. In addition to the yaw angle, the yaw dynamics $\dot{\varepsilon}_r$ and $\dot{\beta}$ can be used to predict the lateral displacement caused by the curve the vehicle is taking [11]. Vision based systems can measure $\Delta y$ at a look ahead distance directly. The lateral deviation $\Delta y$ is used in the feedback control of the lateral controller. The look ahead distance $L$ also influences the controller: a large $L$ maintains stability of the control and dampens yaw dynamics, whereas at slow speeds a large $L$ can cause cutting of sharp corners.

2.2.2 Feedforward

In addition to feedback control, the controller performance can be improved by adding a feedforward component. Tai and Tomizuka noted the analogy between a vehicle lateral control system on a curved road and a mechanical system with Coulomb friction, and proposed feedforward compensation based on road curvature to improve the tracking error without compromising too much of the other system performance [12], [9]. The feedforward control is usually based on the road curvature and the vehicle model. To estimate the road curvature, [8] implemented an observer where the road curvature is included in the state vector. Hasegawa and Konaka [13] propose a method to estimate road curvature with three look-ahead points. As this is done in simulation, it does not offer a way to obtain these look-ahead points in a real world application.
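As a small illustration of the two control components: the look-ahead error below follows the formula $\Delta y = \Delta y_r + L \cdot \varepsilon_r$ from section 2.2.1, while the feedforward term assumes a simple kinematic single-track relation between road curvature and steering angle. The latter is one common choice and an assumption here, not necessarily the compensation used in [12].

```python
import math

def lookahead_error(dy_r, eps_r, L):
    """Lateral error at the look ahead distance, section 2.2.1:
    Delta_y = Delta_y_r + L * eps_r."""
    return dy_r + L * eps_r

def feedforward_steer(curvature, wheelbase=4.0):
    """Kinematic feedforward steering angle [rad] for a road curvature
    [1/m]. Assumes a slip-free single-track geometry; the wheelbase
    value is an arbitrary example."""
    return math.atan(wheelbase * curvature)

print(lookahead_error(dy_r=0.10, eps_r=0.01, L=25.0))   # [m]
print(feedforward_steer(1.0 / 500.0))                   # R = 500 m curve
```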
3 Sensors

The trucks used by the Advanced Technology and Research (Atr) department at Volvo Group are equipped with a variety of sensors that can be used in this project. This chapter describes the different sensors, their output signals and how they are used to generate a lateral control signal. A variety of different sensors were analyzed, including radar and camera systems for object detection as well as lane detectors, Gps and vehicle state sensors. Figure 3.1 shows a visualization of different measurements from sensors that are mounted on a truck.

[Plot: scenario of 3 vehicles on an East-North plane with origin south west of Hallered; axes East [m] and North [m].]
Figure 3.1: Visualization of the ego vehicle and the two target vehicles with these signals: (black x, o) radar measurements; (green) position of the target vehicles from the Dgps system; (blue) path taken by the ego vehicle; (gray) lane markings; (magenta) field of view of the radar and the camera; (red) estimated positions (x on red line) of the vehicles from fused radar and camera measurements, including the positions of the rear corners (connected by a red line) and the far corner (x); (cyan) lane marking estimation from the camera.

3.1 Object Detection

The trucks have different sensors that are able to detect objects in the surroundings of the truck. The evaluated sensors include radar and camera. All of the evaluated sensors are mounted at the front of the truck, looking forward.

3.1.1 Radar

Radars can detect different objects in front of the Ego vehicle. The radar systems used in the automotive industry output preprocessed data from the radar measurements. The systems evaluated in this project can detect, track and classify objects in their field of view. Measurements include the position and velocity of tracked objects in the Ego vehicle coordinate system (Vcs). Classifications can range from "moving" or "stationary" to more sophisticated classes such as different kinds of vehicles and stationary objects, depending on their size and radar reflectivity.

3.1.2 Camera

Cameras have become increasingly important sensors in the automotive industry. With more sophisticated image processing algorithms and more powerful processors, cameras are very suitable for object detection. Similar to the radar, the cameras used in the automotive industry detect different vehicles and objects in the vicinity of the Ego vehicle. The image processing algorithms detect, follow and classify different objects. More modern cameras also detect traffic signs and traffic lights, as well as the turn and brake signals of preceding vehicles.

3.2 Lane Detection Systems

Lane marking measurement systems are based on cameras and detect lane markings in the field of view of the camera. The measurements from such systems describe lane markings in the Ego Vcs. The lane marking measurements can either be represented as clothoids ($z = [\Delta y, \alpha, c_0, c_1]^T$), where every lane marking measurement has a lateral deviation from the vehicle ($\Delta y$), a heading ($\alpha$), a curvature ($c_0$) and a curvature rate ($c_1$), or as polynomials in the Vcs of the following form:

$$y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \qquad (3.1)$$

Clothoid lane markings can be approximated by third order polynomials using the following approximation:

$$y(x) = \Delta y - \alpha x + \frac{c_0}{2}x^2 + \frac{c_1}{6}x^3 \qquad (3.2)$$
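The mapping between the clothoid representation and the polynomial coefficients of equations (3.1) and (3.2) can be written compactly. The sketch below assumes the sign convention of equation (3.2); function names and example values are illustrative.

```python
import numpy as np

def clothoid_to_poly(dy, alpha, c0, c1):
    """Polynomial coefficients [a0, a1, a2, a3] approximating a clothoid
    lane marking, following equation (3.2)."""
    return np.array([dy, -alpha, c0 / 2.0, c1 / 6.0])

def eval_lane(coeffs, x):
    """Lateral position y(x) of the lane marking in the Vcs."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * x + a2 * x**2 + a3 * x**3

coeffs = clothoid_to_poly(dy=1.8, alpha=0.01, c0=1e-3, c1=1e-5)
x = np.linspace(0.0, 50.0, 6)     # look ahead distances [m]
print(eval_lane(coeffs, x))
```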
3.3 GPS

3.3.1 RACELOGIC VBOX System

For test purposes, Vbox data loggers by Racelogic are available and can be mounted on the vehicles. The Racelogic Vbox is a Gps data logger that can be used in different setups. To use the system, the vehicles have to be equipped with two Gps antennas and radio communication antennas. When one Vbox system is used as a stand-alone system, absolute Gps coordinates can be logged with an accuracy of 2 m. When two vehicles are equipped with one Vbox system each (moving base setup), the relative positions between the vehicles can be detected with an accuracy of ±2 cm, while the absolute positions remain at 2 m accuracy. The Vbox can also be used in conjunction with a stationary base station; the absolute position of a vehicle can then be logged with an accuracy of ±2 cm. The use of the base station is limited by the radio communication range between the vehicle and the station, which is a few hundred meters.

3.4 Sensor Setup

The goal of this project was to develop an application that could be used on a commercially available vehicle. Therefore the sensors used as inputs to the developed algorithms are all systems that are available in commercial vehicles. The following sensor systems have been used in the development of the algorithms:

- radar for object detection
- camera for object detection
- camera for lane detection
- vehicle state sensors for dead-reckoning

Apart from the sensors used in the algorithms, a Gps system was used for generating ground truth data to test the algorithms.

3.4.1 Object Detection

Two different systems for object detection were used in this project, a radar based system and a camera based system. The radar system can detect and track six different objects in its field of view and classify them as stationary or moving. The measurements include object position and velocity in the Ego Vcs, along with the standard deviations of the measurements. In addition to the radar, a camera system is used that can detect six different objects and measure the objects' positions in the Ego Vcs. The data from the camera and the radar are fused together to find an accurate estimate of the position of the preceding vehicle. This is described in more detail in section 4.5. A statistical analysis of the sensor data can be found in section 6.4.

3.4.2 Lane Detection

The lane detection system used in this project is capable of measuring two different lane markings, one on each side of the vehicle. The lane markings are measured in the clothoid representation. The measurements and approximations of the left and right lane markings can be seen in equation (3.3), where $x_v$ is the position on the x-axis of the Vcs.

$$\begin{aligned}
\text{Measurements:} \quad & z^l = [\Delta y^l, \alpha^l, c_0^l, c_1^l]^T, \qquad z^r = [\Delta y^r, \alpha^r, c_0^r, c_1^r]^T \\
\text{Approximations:} \quad & y^l(x_v) = \Delta y^l - \alpha^l x_v + \frac{c_0^l}{2}x_v^2 + \frac{c_1^l}{6}x_v^3 \\
& y^r(x_v) = -\Delta y^r - \alpha^r x_v + \frac{c_0^r}{2}x_v^2 + \frac{c_1^r}{6}x_v^3
\end{aligned} \qquad (3.3)$$

The functions $y^l(x_v)$ and $y^r(x_v)$ describe the lateral position of the lane markings as functions of $x_v$ in the Ego Vcs. The sign of $\Delta y^r$ is negative due to a property of the camera: the measurement is positive although the lateral deviation of the right lane marking is negative in the Vcs. The heading angle $\alpha$ is the heading between the lane marking and the vehicle, positive in the counterclockwise direction (i.e. opposite to the heading in the Ego Vcs), and therefore this coefficient has a negative sign in equation (3.3).
The coefficients $c_0$ and $c_1$ are positive for curvatures and curvature rates in the counterclockwise direction. For each measurement of a lane marking the camera generates a confidence value between 0 and 10. The measurements are used to generate a path in the middle of the two detected lane markings. The path is used to generate a lateral reference signal that the vehicle controller is supposed to follow. The filter algorithm, with a Ckf prediction and a linear Kalman update, is described in more detail in section 4.3. A statistical analysis of the sensor data can be found in section 6.1.

3.4.3 Vehicle State Sensors

All the trucks are equipped with a variety of internal sensors that monitor the states of the vehicle. The measurements of these sensors are available on the internal Can (Controller Area Network) buses of the trucks. In the algorithms developed within this project, the vehicle state sensors are used for dead-reckoning. Two different sensors are used for this part of the algorithms: the velocity measurement from the tachometer is used to estimate the distance the vehicle has traveled, and the yaw rate measurement from the gyroscope is used to estimate the vehicle's rotation. The dead-reckoning algorithm is explained in more detail together with the filters described in chapter 4.

3.4.4 GPS

The Vbox system is used to generate ground truth data for the lane marking measurements. To generate it, the Gps data are combined with the lane marking measurements, filtered with a Kalman filter and smoothed with an Rts smoother. This is described in section 4.1. In all the measurements done for this project, the Vbox system was configured in a moving base setup without a stationary base station.
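The dead-reckoning of section 3.4.3 amounts to integrating the tachometer velocity and gyroscope yaw rate over one step. The sketch below shows such a planar pose update under the assumption that both signals are constant within a step; the function name and state layout are illustrative, not the thesis implementation.

```python
import math

def dead_reckon(pose, v, yaw_rate, dt):
    """Propagate a planar pose (x, y, heading) one step using the
    tachometer velocity v [m/s] and the gyroscope yaw rate [rad/s].
    Assumes both are constant during the step of length dt [s]."""
    x, y, heading = pose
    dx = v * dt                    # distance traveled along the heading
    dphi = yaw_rate * dt           # change in orientation
    x += dx * math.cos(heading)
    y += dx * math.sin(heading)
    return (x, y, heading + dphi)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                # 1 s at 25 m/s in a gentle curve
    pose = dead_reckon(pose, v=25.0, yaw_rate=0.02, dt=0.1)
print(pose)
```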
4 Methods

Figure 4.1: Overview figure that illustrates the sensor measurements (Ego vehicle, preceding vehicle, lane marking measurements, field of view) and the paths (lane marking path and preceding vehicle path) that are generated using the methods implemented in this thesis.

4.1 Generating a reference with lane markings

Lane markings can be used to generate a path on the road that the vehicle can follow by filtering the lane marking measurements. There are two main ways in which lane markings are represented by lane detection cameras. In the first, the camera outputs a third order polynomial that describes the lane marking in the vehicle coordinate system; the vehicle coordinate system has its x-axis in the direction of the heading of the vehicle:

$$y(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \qquad (4.1)$$

The second method describes the lane marking as a clothoid with a lateral deviation ($\Delta y$), a heading ($\alpha$), a curvature ($c_0$) and a curvature rate ($c_1$). A lane marking described as a clothoid can be approximated with the following third order polynomial [14]:

$$y(x) = \Delta y + \alpha x + \frac{c_0}{2}x^2 + \frac{c_1}{6}x^3 \qquad (4.2)$$

The lane detection camera used in this project describes the lane markings as clothoids. It is able to detect two lane markings, one on the left side and one on the right side of the vehicle. Because of the way the camera provides the coefficients, the exact polynomials for the left and right lane marking measurements look as follows:

$$y^l(x) = \Delta y^l - \alpha^l x + \frac{c_0^l}{2}x^2 + \frac{c_1^l}{6}x^3, \qquad y^r(x) = -\Delta y^r - \alpha^r x + \frac{c_0^r}{2}x^2 + \frac{c_1^r}{6}x^3 \qquad (4.3)$$

4.1.1 Generating a path from lane markings

With the measurements of two lane markings, a path in the center of the lane can be generated for the vehicle to follow. For practical purposes the path can also be offset from the center of the lane if desired; adding a constant to the coefficient $\Delta y^p$ shifts the path in parallel. The lane marking measurements are of the form $z^l = [\Delta y^l, \alpha^l, c_0^l, c_1^l]^T$ and $z^r = [\Delta y^r, \alpha^r, c_0^r, c_1^r]^T$, where $z^l$ and $z^r$ are measurements of the left and right lane markings respectively. In addition to the measurements, the lane detection camera provides a confidence value ($conf^l$, $conf^r$) between 0 and 10 for each measurement. Before a filter is applied, the measurements from the left and right lane markings are combined into the path $z^p = [\Delta y^p, \alpha^p, c_0^p, c_1^p]^T$, which describes the path in the middle of the lane. The following cases have to be considered when calculating $z^p$:

Both lane markings are detected
If both lane markings are detected, the average of the coefficients can be calculated. If $conf^l$ is equal to $conf^r$, then $\Delta y^p = \Delta y^l - \Delta y^r$. The sign of $\Delta y^r$ is negative due to the property of the camera: the measurement is positive although the lateral deviation of the right lane marking is negative in the Vcs. The other coefficients are then averaged. If $conf^l$ and $conf^r$ are not equal, $\Delta y^p$ is calculated the same way, but the other coefficients are the weighted averages of the respective coefficients in $z^l$ and $z^r$, with $conf^l$ and $conf^r$ used as the weights.

One lane marking is not detected for a short time
In this case the path in the center of the lane markings can only be generated if the width of the lane is known. The width of the lane can be measured while both lane markings are detected, and it can be assumed that the lane width does not change over a short period of time. The coefficients $[\alpha^p, c_0^p, c_1^p]$ are equal to the coefficients of the detected lane marking.

Only one lane marking is detected
In this case the width of the lane has to be assumed, and the path can follow the lane marking at a predefined distance. The coefficients $\alpha^p$, $c_0^p$ and $c_1^p$ are the same as those of the detected lane marking.

No lane marking is detected
The measurement is discarded.

4.1.2 Coefficient and Sampled Lane Marking Measurement and Filters

There are two different ways to represent and filter a path generated from lane markings. One way is a coefficient representation like the ones in equations (4.1) and (4.2), where the lane marking function $y(x)$ formally has a value for all $x$ between $-\infty$ and $+\infty$; in practice this representation describes the lane markings within certain bounds on $x$, between 0 and around 50 m. The second way is to represent the path as sampled points in x and y, similar to the methods proposed by Fernándes et al. [15]. The following sections describe both principles in more detail, and for each a filter is presented. Ultimately it was decided to go forward with the sampled version, as it is less sensitive to coefficient noise [14] and the performance of the sampled filter proved better in tests than that of the coefficient filter (see section 6.2).
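The case handling of section 4.1.1 can be condensed into a small function. The sketch below is illustrative: the detection test, the default lane width and the half-width offsets for the single-marking cases are assumptions (the thesis uses a confidence threshold of 3, see section 4.2.2).

```python
import numpy as np

def combine_lane_measurements(zl, zr, confl, confr, lane_width=3.5,
                              conf_min=3):
    """Combine left/right clothoid measurements z = [dy, alpha, c0, c1]
    into a center path zp, following the cases in section 4.1.1."""
    zl, zr = np.asarray(zl, float), np.asarray(zr, float)
    left_ok, right_ok = confl >= conf_min, confr >= conf_min
    if left_ok and right_ok:
        dyp = zl[0] - zr[0]                    # camera sign convention
        w = np.array([confl, confr], float) / (confl + confr)
        rest = w[0] * zl[1:] + w[1] * zr[1:]   # (confidence-weighted) average
        return np.concatenate(([dyp], rest))
    if left_ok:    # follow the detected marking at an assumed offset
        return np.concatenate(([zl[0] - lane_width / 2.0], zl[1:]))
    if right_ok:
        return np.concatenate(([-zr[0] + lane_width / 2.0], zr[1:]))
    return None    # no detection: the measurement is discarded

zp = combine_lane_measurements([1.9, 0.01, 1e-3, 1e-5],
                               [1.6, 0.012, 1e-3, 1e-5], confl=8, confr=7)
print(zp)
```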
4.2 Coefficient Filter for Lane Marking Path

The first filter is a coefficient filter, a Kalman filter with both a linear process model and a linear measurement model. The development of this filter was discontinued after initial tests showed that the sampled filter described in section 4.3 performed better, as seen in section 6.2.

4.2.1 Prediction step

Using the third order polynomial that approximates a clothoid in equation (4.2), the motion model can be derived. The states that are predicted with the motion model are the lateral deviation ($\Delta y$), the heading ($\alpha$), the curvature ($c_0$) and the curvature rate ($c_1$) of the averaged lane marking. The movement and change in orientation of the truck are calculated using measurements of the velocity ($v$), the yaw rate ($\dot{\varphi}$) and the time between lane marking measurements ($dt$). The distance traveled between each prediction is then $x = v \cdot dt$ and the change in orientation is $d\varphi = \arctan(-\dot{\varphi} \cdot dt)$. Since the angles are very small during highway driving, the small angle approximation can be applied, which gives $d\varphi = -\dot{\varphi} \cdot dt$. The prediction model can be seen in equation (4.4), where $x_v$ is the position on the x-axis of the Vcs:

$$x(k+1) = A \cdot x(k) + B \cdot u, \qquad
x(k) = \begin{bmatrix} \Delta y \\ \alpha \\ c_0 \\ c_1 \end{bmatrix}, \quad
u = \dot{\varphi}, \quad
A = \begin{bmatrix}
1 & x_v & \frac{x_v^2}{2} & \frac{x_v^3}{6} \\
0 & 1 & x_v & \frac{x_v^2}{2} \\
0 & 0 & 1 & x_v \\
0 & 0 & 0 & 1
\end{bmatrix}, \quad
B = \begin{bmatrix} 0 \\ -dt \\ 0 \\ 0 \end{bmatrix} \qquad (4.4)$$

4.2.2 Update step

Since all the states are measured, the update is linear with the measurement matrix $H$ as the identity matrix. When the confidence values ($conf^l$, $conf^r$) are both below a confidence threshold (set to 3), the measurement is considered invalid and the update step is skipped.

$$\begin{aligned}
S_k &= HP_{k|k-1}H^T + R_k \\
K_k &= P_{k|k-1}H^T S_k^{-1} \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\big(y_k - H\hat{x}_{k|k-1}\big) \\
P_{k|k} &= P_{k|k-1} - K_k S_k K_k^T
\end{aligned}, \qquad H = I_n, \quad R_k: \text{measurement noise covariance} \qquad (4.5)$$
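The prediction step of equation (4.4) is a standard linear Kalman prediction once $A$ and $B$ are built from the traveled distance. The sketch below is a direct transcription; the process noise $Q$ and all example values are assumed for illustration.

```python
import numpy as np

def coefficient_predict(x, P, v, yaw_rate, dt, Q):
    """One prediction step of the coefficient filter, equation (4.4).
    State x = [dy, alpha, c0, c1]; Q is a tuned process noise (assumed)."""
    xv = v * dt                         # distance traveled during the step
    A = np.array([[1.0, xv, xv**2 / 2.0, xv**3 / 6.0],
                  [0.0, 1.0, xv,          xv**2 / 2.0],
                  [0.0, 0.0, 1.0,         xv],
                  [0.0, 0.0, 0.0,         1.0]])
    B = np.array([0.0, -dt, 0.0, 0.0])
    x_pred = A @ x + B * yaw_rate       # control input u = yaw rate
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

x = np.array([0.1, 0.005, 1e-3, 1e-5])
P = np.diag([0.1, 0.01, 1e-4, 1e-6])
x, P = coefficient_predict(x, P, v=22.0, yaw_rate=0.01, dt=0.05,
                           Q=1e-4 * np.eye(4))
print(x)
```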
4.3 Sampled Filter for Lane Marking Path

The sampled filter is a Kalman filter with a nonlinear process model for the prediction and a linear measurement model for the update. For the prediction step a cubature Kalman filter is used; since the measurement model is linear, the update is a direct Kalman update step. It was decided to go forward with this filter in this project, as it performed better than the coefficient filter.

4.3.1 Prediction step

Figure 4.2: How the path is sampled for the filter. Like the lane marking measurements, the sampled path is described in the vehicle coordinate system (Vcs) $(x_v, y_v)$. The samples are distributed equidistantly in x at distance $\Delta$.

The prediction step of the filter estimates the positions of the sampled path at the time of the next lane marking measurement. It does that by applying the nonlinear function $f(x, v, \dot{\varphi}, dt)$ to the previous estimate. All the formulas for the cubature Kalman prediction can be found in equation (4.8).

Nonlinear prediction function $f(x, v, \dot{\varphi}, dt)$
The nonlinear function for the prediction step is divided into three steps:

1) Move coordinate system
Using the time elapsed since the previous measurement ($dt$), the yaw rate ($\dot{\varphi}$) and the velocity of the vehicle ($v$), the coordinate system $(x^v_{k-1}, y^v_{k-1})$ is moved to the new position $(x^v_k, y^v_k)$. The coordinate system is first moved along the x-axis by $dx = v \cdot dt$ and then rotated around the z-axis by $d\varphi = \dot{\varphi} \cdot dt$ (figure 4.3).

Figure 4.3: Moving the coordinate system in the nonlinear function $f(x, v, \dot{\varphi}, dt)$ in the prediction step.

2) Update estimates
The estimates $x_{k-1} = [y^0_{k-1}, y^1_{k-1}, \dots]$ are moved to the new coordinate system. Since this transformation includes a rotation, the updated points $p^i$ have both x and y components (figure 4.4):

$$\begin{aligned}
p^i_x &= (\Delta i - dx)\cos(-d\varphi) - y^i_{k-1}\sin(-d\varphi) \\
p^i_y &= (\Delta i - dx)\sin(-d\varphi) + y^i_{k-1}\cos(-d\varphi)
\end{aligned}, \qquad i = 0, 1, \dots, n \qquad (4.6)$$

where $n$ is the number of points in the discretized path minus 1.

Figure 4.4: Update of the estimates to the moved coordinate system in the nonlinear function $f(x, v, \dot{\varphi}, dt)$ in the prediction step.

3) Interpolation/Extrapolation
The prediction points are calculated by linearly interpolating from the points $p^i$ to $x = [0, \Delta, 2\Delta, \dots]$ (figure 4.5):

$$\begin{aligned}
y^i_k &= p^i_y + \frac{(\Delta i - p^i_x)(p^{i+1}_y - p^i_y)}{p^{i+1}_x - p^i_x}, \qquad i = 0, 1, \dots, n-1 \\
y^n_k &= p^n_y + \frac{(\Delta n - p^n_x)(p^n_y - p^{n-1}_y)}{p^n_x - p^{n-1}_x}
\end{aligned} \qquad (4.7)$$

Since it is not possible to interpolate a value at the last x position, the last point has to be extrapolated. In the current implementation this is done with a linear extrapolation through $p^{n-1}$ and $p^n$. An alternative method would be to extrapolate by setting the angle $\angle p^{n-1}, p^n, p(\Delta n, y^n_k)$ equal to $\angle p^{n-2}, p^{n-1}, p^n$.

Figure 4.5: Interpolation/extrapolation to obtain the prediction points in the nonlinear function $f(x, v, \dot{\varphi}, dt)$ in the prediction step.

Since $f(x, v, \dot{\varphi}, dt)$ is nonlinear, the prediction step is done with a cubature Kalman filter:

$$\begin{aligned}
\mathcal{X}^{(i)}_{k-1} &= \hat{x}_{k-1|k-1} + \sqrt{n}\,\big(P^{1/2}_{k-1|k-1}\big)_i, \qquad i = 1, 2, \dots, n \\
\mathcal{X}^{(i+n)}_{k-1} &= \hat{x}_{k-1|k-1} - \sqrt{n}\,\big(P^{1/2}_{k-1|k-1}\big)_i, \qquad i = 1, 2, \dots, n \\
W_i &= \frac{1}{2n}, \qquad i = 1, 2, \dots, 2n \\
\hat{x}_{k|k-1} &= \sum_{i=1}^{2n} f\big(\mathcal{X}^{(i)}_{k-1}, v, \dot{\varphi}, dt\big)\,W_i \\
P_{k|k-1} &= \sum_{i=1}^{2n} \big(f(\mathcal{X}^{(i)}_{k-1}, v, \dot{\varphi}, dt) - \hat{x}_{k|k-1}\big)(\cdot)^T\,W_i + Q_{k-1}
\end{aligned} \qquad (4.8)$$

where $n$ is the number of points in the discretized path.

4.3.2 Update step

Measurement model
This filter is set up so that the measurements are the sampled path. Therefore the lane marking estimates from the lane detection camera must be translated to sampled path points. The lane marking measurements are of the form $z^p = [\Delta y^p, \alpha^p, c_0^p, c_1^p]^T$ according to section 4.1.1. The path $z^p$ can then be used to calculate the sampled measurement points for the filter:

$$y = \begin{bmatrix}
1 & \Delta & \frac{1}{2}\Delta^2 & \frac{1}{6}\Delta^3 \\
1 & 2\Delta & \frac{1}{2}(2\Delta)^2 & \frac{1}{6}(2\Delta)^3 \\
\vdots & \vdots & \vdots & \vdots \\
1 & n\Delta & \frac{1}{2}(n\Delta)^2 & \frac{1}{6}(n\Delta)^3
\end{bmatrix} z^p \qquad (4.9)$$

The update step is then a linear Kalman update:

$$\begin{aligned}
S_k &= HP_{k|k-1}H^T + R_k \\
K_k &= P_{k|k-1}H^T S_k^{-1} \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\big(y_k - H\hat{x}_{k|k-1}\big) \\
P_{k|k} &= P_{k|k-1} - K_k S_k K_k^T
\end{aligned}, \qquad H = I_n, \quad R_k: \text{measurement noise covariance} \qquad (4.10)$$
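A compact sketch of the three-step prediction function $f$ of section 4.3.1 is given below. The geometry follows equations (4.6) and (4.7); the sample spacing $\Delta = 4$ m, the variable names and the example path are assumptions for illustration.

```python
import numpy as np

def f_predict_path(y, v, yaw_rate, dt, delta=4.0):
    """Nonlinear prediction f(x, v, yaw_rate, dt): move the Vcs, express
    the sampled path points in the moved frame, and re-sample them at
    x = 0, delta, 2*delta, ... by linear interpolation/extrapolation."""
    n = len(y) - 1
    dx, dphi = v * dt, yaw_rate * dt          # step 1: move the frame
    c, s = np.cos(-dphi), np.sin(-dphi)
    xs = delta * np.arange(n + 1) - dx
    px = xs * c - y * s                        # step 2: equation (4.6)
    py = xs * s + y * c
    grid = delta * np.arange(n + 1)            # step 3: equation (4.7)
    y_new = np.interp(grid, px, py)            # linear interpolation
    # np.interp clamps beyond the last point, so extrapolate the last
    # sample linearly through p[n-1] and p[n] as in the implementation.
    y_new[-1] = py[-1] + (grid[-1] - px[-1]) * (py[-1] - py[-2]) \
        / (px[-1] - px[-2])
    return y_new

y = np.array([0.0, 0.2, 0.5, 0.9, 1.4, 2.0])   # sampled path [m]
print(f_predict_path(y, v=20.0, yaw_rate=0.02, dt=0.05))
```

Passing this function as $f$ in the cubature prediction of equation (4.8), with one call per sigma point, gives the predicted path and its covariance.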
4.4 Generating a reference with preceding vehicle

When a vehicle is driving in front of the Ego vehicle, its position can be detected by the object detection sensors (radar and camera). These positions can be used as a reference for the lateral control of the Ego vehicle.

4.4.1 Generating reference path

A reference path can be constructed from the filtered position of the preceding vehicle. The path contains $N$ points in the Ego vehicle coordinate system. The points are placed equidistantly along the x-axis of the Vcs with a fixed longitudinal distance $deltaPrec$ between them.

Figure 4.6: Initial path from the preceding vehicle when a vehicle enters the field of view. A) shows how the N points in the path are generated when a lane marking path is available, by offsetting the lane marking path and adding the preceding vehicle position as the Nth point; B) shows how the points are generated when no lane marking path is available, by linear interpolation between the Ego vehicle position and the preceding vehicle position.

Initial path
When no vehicle is visible to the sensors, no path is generated. As soon as a vehicle appears in the field of view of the Ego vehicle, a path between the Ego vehicle and the preceding vehicle is generated, and how depends on whether the lane marking path is available or not. The path is a series of $N$ points in the vehicle coordinate system. If the lane markings are available, the points are placed parallel to the lane markings with an offset; the offset is the lateral difference between the position of the preceding vehicle and the lane markings. The lane marking path consists of $N-1$ points in the vehicle coordinate system, so the Nth point in the preceding vehicle path is the position of the preceding vehicle itself. This method of generating the initial path can be seen in figure 4.6 A. If no lane marking path is available, the initial points are placed by linearly interpolating six points between the Ego vehicle and the preceding vehicle (figure 4.6 B).

Movement of the path
The points of the path represent the trajectory of the preceding vehicle in the Ego vehicle coordinate system. Since the coordinate system moves with the Ego vehicle, the points need to move within the coordinate system according to the distance traveled ($dx$) and the change in orientation ($d\varphi$) of the Ego vehicle. The distance and the orientation change are calculated using the yaw rate ($\dot{\varphi}$), the velocity measurement ($v$) and the step length ($dt$): $d\varphi = dt \cdot \dot{\varphi}$, $dx = dt \cdot v$.

Update points of the path
When the distance between the point in the path closest to the preceding vehicle and the current filtered position of the preceding vehicle is equal to $deltaPrec$, the oldest point (furthest away from the preceding vehicle) is discarded and the current filtered position is added to the path. The update is illustrated in figure 4.7.

Figure 4.7: Updating the preceding vehicle path. Each time the distance between the newest point in the path and the preceding vehicle is larger than $deltaPrec$, the point furthest away from the preceding vehicle is removed and the preceding vehicle position is added as a new point.

Uncertainties
The movement of the points can be considered pure prediction, since there are no measurements of the points in the path to update with. The prediction step is the same as for the lane marking sampled points, and therefore the same prediction model can be used. The covariance is based on the noise of the yaw rate and the velocity. Since there is no update step, the uncertainties increase over time, which results in very uncertain points on the path, especially if the preceding vehicle is far ahead. By filtering the states of the Ego vehicle, such as velocity and heading, a more accurate movement of the vehicle between time instances could be achieved. This was not in the scope of this thesis but is discussed in the future work, chapter 8.
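The bookkeeping of the preceding vehicle path can be implemented as a fixed-length buffer of points in the Vcs. The sketch below combines the movement and update rules above under the same constant-velocity/yaw-rate assumption as before; deltaPrec, the helper names and the example numbers are illustrative.

```python
import numpy as np

def move_path(points, v, yaw_rate, dt):
    """Express the stored path points (rows of [x, y]) in the Vcs after
    the Ego vehicle has moved dx = v*dt and rotated dphi = yaw_rate*dt,
    mirroring equation (4.6)."""
    dx, dphi = v * dt, yaw_rate * dt
    c, s = np.cos(-dphi), np.sin(-dphi)
    shifted = points - np.array([dx, 0.0])
    rot = np.array([[c, -s], [s, c]])
    return shifted @ rot.T

def update_path(points, prec_pos, delta_prec=4.0):
    """Append the filtered preceding vehicle position as a new point once
    it is delta_prec away from the newest stored point; the oldest point
    is dropped so the path length stays constant."""
    if np.linalg.norm(prec_pos - points[-1]) >= delta_prec:
        points = np.vstack([points[1:], prec_pos])
    return points

path = np.column_stack([np.linspace(0.0, 20.0, 6), np.zeros(6)])
path = move_path(path, v=20.0, yaw_rate=0.01, dt=0.1)
path = update_path(path, prec_pos=np.array([24.0, 0.3]))
print(path)
```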
4.5 Preceding Vehicle Filter

The position of the preceding vehicle is measured with the camera and the radar. The camera measures the longitudinal and lateral distance to the preceding vehicle relative to the Ego vehicle. The radar provides the same measurements as the camera, but additionally outputs the relative velocity in the longitudinal and lateral directions. This sensor information can be fused to obtain an estimate of the position of the preceding vehicle relative to the Ego vehicle. Based on the measurements of the preceding vehicle, the most straightforward choice of state vector is [x, y, v_x, v_y], where x and y are the longitudinal and lateral distances respectively, and v_x and v_y are the corresponding velocities in the longitudinal and lateral directions. All states are relative to the Ego vehicle.

4.5.1 Prediction step

When choosing a process model, the changes between measurement samples need to be considered. The measurements are relative to the Ego vehicle, and the assumption can be made that the two vehicles travel in the same lane with similar velocity, which makes the changes in relative velocity slow. Therefore a constant velocity (CV) model was chosen. Adding acceleration to the model would not bring much improvement, since the changes in acceleration are very small. Since the CV process model is linear, the optimal prediction method is the Kalman prediction. Because the measurements are not synchronous, the step length dt might need to be adaptive; an analysis of different step lengths can be found in section 6.4. The Kalman prediction equations and the discretized CV model are given in equation (4.11):

\[
\hat{x}_{k|k-1} = A\,\hat{x}_{k-1|k-1}, \qquad
P_{k|k-1} = A\,P_{k-1|k-1}\,A^T + Q_k
\]
where
\[
A = \begin{bmatrix}
1 & 0 & dt & 0 \\
0 & 1 & 0 & dt \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}, \qquad
Q_k = \begin{bmatrix}
\tfrac{dt^3}{3} & 0 & \tfrac{dt^2}{2} & 0 \\
0 & \tfrac{dt^3}{3} & 0 & \tfrac{dt^2}{2} \\
\tfrac{dt^2}{2} & 0 & dt & 0 \\
0 & \tfrac{dt^2}{2} & 0 & dt
\end{bmatrix}
\tag{4.11}
\]

4.5.2 Update step

In the update step both the radar and the camera measure the x and y states. Different methods can be used to derive the estimate; the chosen method is to update first with one sensor and then with the second one. Equation (4.12) shows the radar and camera update steps.

Radar update:
\[
\begin{aligned}
S_k &= H P_{k|k-1} H^T + R_{radar} \\
K_k &= P_{k|k-1} H^T S_k^{-1} \\
\hat{x}^{radar}_{k|k} &= \hat{x}_{k|k-1} + K_k\,(y_k - H\hat{x}_{k|k-1}) \\
P^{radar}_{k|k} &= P_{k|k-1} - K_k S_k K_k^T
\end{aligned}
\]
where H = I_4 and R_{radar} = diag([σ²_{LonPosRad}, σ²_{LatPosRad}, σ²_{LonVelRad}, σ²_{LatVelRad}]).

Camera update:
\[
\begin{aligned}
S_k &= H P^{radar}_{k|k} H^T + R_{camera} \\
K_k &= P^{radar}_{k|k} H^T S_k^{-1} \\
\hat{x}_{k|k} &= \hat{x}^{radar}_{k|k} + K_k\,(y_k - H\hat{x}^{radar}_{k|k}) \\
P_{k|k} &= P^{radar}_{k|k} - K_k S_k K_k^T
\end{aligned}
\tag{4.12}
\]
where
\[
H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \qquad
R_{camera} = diag([σ²_{LonPosCam}, σ²_{LatPosCam}])
\]

Since the radar and camera measurements are not synchronous, there are a few cases to consider depending on which measurements are available when the filter is executed:

Radar and camera measurements available: The update step includes both updates according to equation (4.12).
Only radar measurement available: The update step runs only the radar update from equation (4.12).
Only camera measurement available: The update step runs only the camera update from equation (4.12).
No measurements available: The update step is skipped.
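A minimal sketch of one filter cycle under these equations follows. It is illustrative only: the scalar q is an assumed process noise intensity (equation (4.11) leaves it implicit), and the availability flags stand in for however the real implementation detects a fresh sample.

```python
import numpy as np

def cv_predict(x, P, dt, q=1.0):
    # Kalman prediction with the discretized CV model of eq. (4.11);
    # q scales the process noise and is an assumption of this sketch.
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.array([[dt**3/3, 0,       dt**2/2, 0      ],
                      [0,       dt**3/3, 0,       dt**2/2],
                      [dt**2/2, 0,       dt,      0      ],
                      [0,       dt**2/2, 0,       dt     ]])
    return A @ x, A @ P @ A.T + Q

def linear_update(x, P, y, H, R):
    # One linear Kalman update (eq. 4.12); run once per available sensor.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (y - H @ x), P - K @ S @ K.T

def fuse_step(x, P, dt, radar=None, camera=None, R_radar=None, R_camera=None):
    # Sequential update: radar first (measures all four states), then camera
    # (measures the two positions); either update is skipped when absent.
    x, P = cv_predict(x, P, dt)
    if radar is not None:
        x, P = linear_update(x, P, radar, np.eye(4), R_radar)
    if camera is not None:
        H_cam = np.array([[1., 0., 0., 0.],
                          [0., 1., 0., 0.]])
        x, P = linear_update(x, P, camera, H_cam, R_camera)
    return x, P
```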
4.6 Control Signals

The reference paths from the lane markings and the preceding vehicle are represented in the same way, so the control signals can be calculated in the same way for both paths. The control signals are the lateral error and the heading error of the truck relative to the lane at a look-ahead distance (see figure 4.8). The look-ahead distance depends on the truck's velocity and a look-ahead time (lookAheadDistance = velocity · lookAheadTime); the look-ahead time is set to 1 second. A decision has to be made as to which path to follow, and this is done using the uncertainty of the paths at the look-ahead point:

Lane Marking Path: Calculate the control signal from the lane marking path. If the uncertainty of the lane marking path grows larger than the uncertainty of the preceding vehicle path, change to the state "Preceding Path".

Preceding Path: Calculate the control signal from the preceding vehicle path. If the uncertainty of the preceding vehicle path grows larger than the uncertainty of the lane marking path, change to the state "Lane Marking Path".

Figure 4.8: Lateral and heading error to the path obtained from the sampled filter estimates in the Vcs.

Another approach would be to fuse the two paths into a single reference path. This approach was not chosen because the lane marking path lies in the middle of the lane while the preceding vehicle path deviates from the middle, since it is difficult to keep a constant lateral position in a lane while driving (this can be seen in figure 6.14). Moreover, the uncertainty of the lane marking path is much smaller than the uncertainty of the preceding vehicle path whenever lane marking measurements are available.

In the case of the coefficient filter, the lateral error is calculated using the approximation in equation (4.2) and the heading error is its derivative. Both are given in equation (4.13):

\[
y_{ref} = \Delta y + \alpha\,x + \frac{c_0}{2}x^2 + \frac{c_1}{6}x^3, \qquad
\theta_{ref} = \alpha + c_0\,x + \frac{c_1}{2}x^2
\tag{4.13}
\]
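As an illustration, the sketch below evaluates the coefficient-filter errors of equation (4.13) at the look-ahead point and applies the two-state selection logic described above. The variances are assumed to be the scalar path uncertainties at the look-ahead point, and the function names are invented for this example.

```python
def coeff_errors(dy, alpha, c0, c1, v, look_ahead_time=1.0):
    # Lateral and heading error at the look-ahead point x = v * T (eq. 4.13).
    x = v * look_ahead_time
    y_ref = dy + alpha * x + c0 / 2 * x**2 + c1 / 6 * x**3
    theta_ref = alpha + c0 * x + c1 / 2 * x**2
    return y_ref, theta_ref

def select_path(state, var_lane, var_prec):
    # Two-state switching on the path uncertainties at the look-ahead point.
    if state == "lane_marking" and var_lane > var_prec:
        return "preceding"
    if state == "preceding" and var_prec > var_lane:
        return "lane_marking"
    return state
```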
4.7 Experimental Data

In order to tune the weights and coefficients in the filters, a good set of measurement data is vital. The data was logged on the AstaZero test track using a Volvo FH truck as the subject vehicle and a Volvo V60 as the target vehicle. A map of AstaZero can be found in appendix A.1.

4.7.1 Setup

The Ego vehicle contains the measurement computer and takes care of logging the data. The Vbox system in the target vehicle sends its Dgps data to the Vbox system in the subject vehicle. The sensor information logged during the scenarios is listed in table 4.1.

    Truck (Subject): Velocity, Yaw rate, Lane marking, Camera objects, Radar objects, Dgps
    Car (Target):    Dgps

Table 4.1: The sensor information logged during the scenarios.

4.7.2 Scenarios

The scenarios performed during logging, together with their purposes, are as follows:

Highway driving
Description: Driving at highway velocity (70-90 km/h) without any obstruction blocking the lane markings from the camera.
Purpose: This data is used to tune the lane marking filters, since the view of the lane markings should be ideal.

Highway driving with preceding vehicle
Description: Driving at highway velocity with a preceding vehicle visible to the object detection sensors. Both vehicles drive in the same lane with a distance between them of 20 to 50 meters.
Purpose: The radar and camera information on the lateral and longitudinal distances to objects ahead is used to develop the filter that tracks the preceding vehicle.

Traffic jam
Description: The preceding vehicle simulates a traffic jam by varying its velocity from standstill to normal highway driving.
Purpose: Further develop and tune the prediction step of the filter, as well as the transition between different sensor sources. When the vehicle drives slower than 7 m/s the camera does not detect the lane markings, which must be handled with pure prediction and the detection of the preceding vehicle. When the vehicle is at standstill, the filter covariance matrix must not be allowed to grow.

One test run simulating highway driving is analyzed in more detail in section 4.8. Figure 4.9 (left) shows the trajectory of the truck during the measurement run.

4.8 Lane Marking Sensor Analysis

In the lane marking filter explained in section 4.1, the lane marking measurements are used for the measurement updates. To perform the measurement update, the measurement noise covariance matrix R is needed. Unfortunately this information is not provided by the manufacturer of the lane detection camera. To obtain the covariance matrix R, measured data is compared to ground truth to obtain the distribution of the sensor measurement error. This analysis was done for the sampled measurements as described in section 4.1.2.

4.8.1 Ground Truth

To evaluate the measurements, the ground truth must be known. The data sets used to characterize the measurements were collected on the "rural road" of the AstaZero test track. The rural road is a closed loop of around 5 km with turns in both directions; the tightest turn has a radius of approximately 300 m. The data sets were collected by driving around the rural road in the counter-clockwise direction. A map of AstaZero can be found in appendix A.1. Together with the lane markings, the Gps position and heading of the Ego vehicle were collected during these measurements. With the Gps measurements, the lane markings can be mapped from the vehicle coordinate system (Vcs) onto a map. In this project there were two possible maps: an existing AstaZero map, or a map generated from the measured data.

AstaZero Map
At the Atr department at Volvo Gtt, a detailed map with the measured positions of the lane markings at AstaZero is available. Unfortunately this map could not be used as ground truth in this project. The Gps measurements were collected without a base station for cm accuracy (see section 3.3), which means that they have an offset to the actual positions of the Ego vehicle. It was not possible to map the measured Gps data onto the existing map with high enough accuracy.

Generated from Measured Data
The second possibility to obtain ground truth is to extract it from the measurements by combining the Gps measurements and the lane marking measurements. Even though the absolute Gps positions are not exact, the relative Gps positions within one measurement run are more accurate. This is because the bias on the measurements primarily depends on atmospheric effects and sky blockage; these effects change slowly and can therefore be assumed constant during the short time of one measurement run. The distance information from the lane marking measurement (Δy) was used together with the Gps measurements; for a description of the lane marking measurements see section 3.2. The Gps measurements are transformed to an East-North-Up (Enu) coordinate system, and each lane marking measurement is transformed according to its corresponding vehicle position and heading. This way the two lane markings can be mapped into the Enu coordinate system. The obtained lane marking positions are subject to both the Gps measurement noise (though not the bias from the atmospheric effects) and the measurement noise of the (Δy) measurement of the lane marking camera. The data set used was collected under very good conditions for the lane marking detection camera, i.e. dry roads and no sunshine. The lane markings are filtered with a Kalman filter with a constant velocity model and smoothed with an Rts smoother. The path and a cutout of the measured, filtered and smoothed lane markings can be seen in figure 4.9.
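The mapping of a lane marking sample into the Enu plane amounts to placing the measured lateral offset perpendicular to the vehicle heading at the Gps position. The sketch below shows this for the point measured at x = 0; it assumes a heading in radians measured counter-clockwise from east, which may differ from the convention of the actual Gps unit.

```python
import numpy as np

def lane_point_to_enu(veh_east, veh_north, heading, dy):
    # Place the lane marking point measured at x = 0 (lateral offset dy in
    # the Vcs, positive to the left) into the East-North plane, given the
    # Gps pose of the vehicle at that measurement instance.
    left = np.array([-np.sin(heading), np.cos(heading)])   # unit left vector
    return np.array([veh_east, veh_north]) + dy * left
```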
Figure 4.9: (left) The trajectory the vehicle drove during the measurement run, in an Enu coordinate system. (right) Cutout of the ground truth of the lane markings, generated from the Gps data and the lane marking measurements using a Kalman filter and an Rts smoother.

4.8.2 Evaluating the Measurements

For the evaluation, each lane marking measurement was compared to the ground truth. The lane marking measurements are collected in the vehicle coordinate system (Vcs). For comparison, the ground truth is transformed from Enu to the vehicle coordinate system using the vehicle position and heading (from the Gps data) at each measurement instance. The measurements are sampled at x = {0 m, 10 m, 20 m, 30 m, 40 m}.

Figure 4.10: Visualization of the ground truth of the lane markings and a corresponding measurement in the Vcs.

To calculate the error, the ground truth is interpolated to the same x positions as the measurement samples and subtracted from the measurement points. Figure 4.10 shows the ground truth lane markings and the measurements in the vehicle coordinate system. The error at the sampled x positions is analyzed for each lane marking measurement. The results can be found in section 6.3.

4.9 Preceding Vehicle Sensor Analysis

The initial intention was to use the relative distances between the vehicles, measured with the Vbox system, as ground truth for the filter implementation. However, the Vbox system was not working as it should and the data was not accurate enough to serve as ground truth, so another approach was taken.

4.9.1 Generating ground truth using an RTS smoother

Since the actual ground truth of the preceding vehicle is unknown, the best knowledge about it comes from the measurements. To generate the ground truth, the measurements are filtered using all available measurement samples, with an adaptive step length between the predictions. The filter moments are stored and then smoothed backwards using an Rts smoother. This ground truth is used as the reference when evaluating the step length for the implementation of the filter; the evaluation can be found in section 6.4.

Figure 4.11: Lateral position of the preceding vehicle. The red and green dots are the camera and radar lateral measurements respectively. The blue line is the estimated lateral position and the black line is the smoothed position, i.e. the ground truth.
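The backward pass used here is the standard Rauch-Tung-Striebel recursion over the stored filter moments. The sketch below assumes the filtered and predicted moments and the transition matrices were stored during the forward pass; the storage layout is an assumption of this illustration.

```python
import numpy as np

def rts_smooth(x_filt, P_filt, x_pred, P_pred, A):
    # x_filt[k], P_filt[k]: filtered moments at step k.
    # x_pred[k], P_pred[k]: predicted moments for step k (made at k-1).
    # A[k]: transition matrix used for that prediction.
    K = len(x_filt)
    xs, Ps = [None] * K, [None] * K
    xs[-1], Ps[-1] = x_filt[-1], P_filt[-1]
    for k in range(K - 2, -1, -1):
        G = P_filt[k] @ A[k + 1].T @ np.linalg.inv(P_pred[k + 1])
        xs[k] = x_filt[k] + G @ (xs[k + 1] - x_pred[k + 1])
        Ps[k] = P_filt[k] + G @ (Ps[k + 1] - P_pred[k + 1]) @ G.T
    return xs, Ps
```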
5 Implementation

5.1 Simulink Implementation

The algorithm proposed in this thesis is implemented in Simulink for running on the test truck at the Volvo Group Advanced Technology & Research (Atr) department. Since the algorithm was developed in Matlab, the Simulink implementation contains the main processing blocks of the algorithm as Matlab function blocks. Figure 5.1 shows the block diagram of the algorithm implemented in Simulink.

Figure 5.1: Block diagram of the Simulink implementation of the algorithm, connecting the measurement sampling, lane marking filter, preceding vehicle fusion, preceding vehicle path generator and reference error generator blocks. A detailed picture of the Simulink implementation can be found in the appendix.

measurement sampling
The inputs to this block are the coefficient measurements Δy, α, c0 and c1 (see equation (4.2) on page 17) of the left and right lane markings. The block generates a sampled lane marking path from the measurements as described in sections 4.1.1 and 4.3.2. The output is a sampled lane marking path together with a confidence factor between 0 and 10.

lane marking filter
This block contains the sampled lane marking path filter described in section 4.3. The inputs are a sampled lane marking path measurement, a confidence factor between 0 and 10, and the velocity and yaw rate measurements from the proprioceptive sensors. The output is the filtered lane marking path together with its covariance matrix P.

preceding vehicle fusion
This block contains the filter that fuses the preceding vehicle positions from the radar and the camera, explained in section 4.5. The inputs are the lateral and longitudinal position of the preceding vehicle in the Vcs from the radar and the camera, as well as the lateral and longitudinal velocity from the radar. The output is the filtered position of the preceding vehicle.

preceding vehicle path generator
This block generates the preceding vehicle path using the preceding vehicle position and the measurements from the proprioceptive sensors (see section 4.4). The inputs are the filtered preceding vehicle position, the lane marking path and the velocity and yaw rate measurements from the proprioceptive sensors. The output is the preceding vehicle path.

reference error generator
This block uses the path generated from the lane markings and the path generated from the preceding vehicle to calculate the lateral error and heading error signals for the lateral controller.
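As a compact view of the dataflow, the following Python stub signatures mirror the five function blocks; every name, type and shape here is an assumption made for illustration, not the thesis interface.

```python
import numpy as np
from typing import Tuple

def measurement_sampling(left_coeffs: np.ndarray,
                         right_coeffs: np.ndarray) -> Tuple[np.ndarray, float]:
    """[dy, alpha, c0, c1] per lane -> sampled lane path, confidence in [0, 10]."""
    ...

def lane_marking_filter(lane_path_meas: np.ndarray, confidence: float,
                        v: float, yaw_rate: float) -> Tuple[np.ndarray, np.ndarray]:
    """Sampled path measurement + proprioception -> filtered path and covariance P."""
    ...

def preceding_vehicle_fusion(radar_meas: np.ndarray,
                             camera_meas: np.ndarray) -> np.ndarray:
    """Radar position/velocity + camera position -> fused preceding vehicle position."""
    ...

def preceding_vehicle_path_generator(prec_pos: np.ndarray, lane_path: np.ndarray,
                                     v: float, yaw_rate: float) -> np.ndarray:
    """Fused position + lane path + proprioception -> preceding vehicle path."""
    ...

def reference_error_generator(lane_path: np.ndarray, P: np.ndarray,
                              prec_path: np.ndarray) -> Tuple[float, float]:
    """Both paths and their uncertainties -> (lateral error, heading error)."""
    ...
```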
6 Results

6.1 Lane Marking Error Statistics

This section presents a statistical analysis of the lane marking measurements. The left and right lane marking measurements are analyzed separately. The data set used for this analysis is described in section 4.8. Since no dataset was available that included both lane marking measurements and ground truth, the ground truth was generated from the lane marking measurements combined with Gps data.

6.1.1 Absolute Error

The sensor measurements are evaluated to determine the standard deviations that are used in the measurement covariance matrix of the implemented filter (see section 4.3). The measurement setup is described in detail in section 4.8. Figure 6.1 shows the absolute error between the lane marking measurements and the ground truth at different look-ahead distances (x = {0 m, 10 m, 20 m, 30 m, 40 m}) as a function of time. The error has a time dependent bias which is noticeable at all look-ahead distances except 0 m. The error and the time dependent bias are very similar for the right and left lane markings. The third plot in figure 6.1 shows the curvature of the road corresponding to the measured lane markings; there is no visible correlation between the curvature and the error or the time dependent bias. Other possible causes for this bias could be the pitch or roll of the truck's cabin, where the camera is mounted. Such correlations are not investigated in this project, but they can be the subject of future work (see chapter 8).

6.1.2 Error Distribution

A statistical analysis of the error shows that the errors at the evaluated look-ahead distances are zero mean and normally distributed. Figures 6.2 and 6.3 show the error distributions for the left and right lane marking measurements respectively. It should be noted that the axes of the histogram for the error at 0 m (leftmost plot in figures 6.2 and 6.3) differ from the axes of the other histograms. The distribution of this error is much narrower than the error distributions at further look-ahead distances. This can have two causes. Firstly, the lane marking measurement at 0 m only depends on the Δy coefficient in the polynomial described in equation (3.2) on page 12, whereas the other measurements are calculated from all the coefficients. Secondly, the measurements at 0 m were used together with the Gps data to generate the ground truth, as described in section 4.8.

Figure 6.1: Absolute error between lane marking measurements and ground truth at different look-ahead distances for the left (top plot) and right (middle plot) lane marking during one measurement run. The curvature of the road (bottom plot) is shown as a reference to evaluate a correlation between road curvature and measurement error.

Figure 6.2: Distribution of the left lane marking measurement error. All sets are normally distributed; the red curves are Gaussian probability density functions with the mean values and standard deviations found in table 6.1.

Figure 6.3: Distribution of the right lane marking measurement error. All sets are normally distributed; the red curves are Gaussian probability density functions with the mean values and standard deviations found in table 6.1.

    Distance                    0 m       10 m      20 m      30 m      40 m
    Left lane marking
      mean [m]                  0         0.0068    0.0105    0.0105    0.0057
      std. dev. [m]             0.0103    0.0473    0.0949    0.1501    0.2197
    Right lane marking
      mean [m]                 -0.0001    0.0075    0.0125    0.0146    0.0128
      std. dev. [m]             0.0104    0.0491    0.0981    0.1566    0.2309
    Combined
      mean [m]                  0         0.0072    0.0115    0.0126    0.0093
      std. dev. [m]             0.0104    0.0482    0.0965    0.1534    0.2254

Table 6.1: Distributions of the lane marking measurement errors, for the left and right lane markings separately and for both combined.

6.2 Lane Marking Filter Comparison

In his paper [14], Schwartz states that filtering the coefficients is very sensitive to noise, since noise in one coefficient can propagate to the other coefficients as they depend on each other. He states that a range-and-error filter, such as the sampled method explained in section 4.3, is more suitable for filtering.
Both filters were implemented and compared, although a statistical comparison was not made. It was clear from visualization that the sampled filter was superior to the coefficient filter. Figure 6.4 shows a comparison of the prediction steps. This is done by simulating that the measurements are unavailable for a certain interval and observing how close the prediction stays to the "hidden" measurements. In figure 6.4 a) the measurement is available and the update step is executed; in figures 6.4 b), c) and d) only the prediction step is executed. It can be seen from the plots that the coefficient filter drifts away from the measurements while the sampled filter performs quite well. In figure 6.4 d), after the vehicle has driven for one second (about 17 meters) without measurements, the error is within 10 cm for the sampled filter but over 40 cm for the coefficient filter.

Figure 6.4: Performance of the prediction step in the sampled filter and the coefficient filter. The gray lanes are the actual measurements, the red line is the average of the left and right lane measurements, the green circles are the estimates from the sampled filter and the blue line is the estimate from the coefficient filter.

6.3 Lane Marking Filter

In this section the performance of the sampled lane marking filter described in section 4.3 is presented. The results are based on the same dataset used for the analysis of the sensor performance in section 6.1; it was collected under very good measurement conditions at AstaZero. An overview of the track driven while collecting the data set can be found in figure 4.9. In the following sections the performance of the filter is compared to the unprocessed measurements: first with the data as collected, then with noise added to the measurements, and finally with measurements removed from the dataset. In section 6.3.1 the errors of all the points are compared to the measurements, and in sections 6.3.2 and 6.3.3 the lateral control signal is compared. The lateral control signal (see section 4.6) is the lateral deviation at a look-ahead distance of 1 s, which means that the look-ahead distance depends on the velocity of the vehicle.

6.3.1 Performance with original sensor data

Here the filter is compared to the measurements using the unaltered dataset. Figure 6.5 shows the error of the filtered lane marking path at different look-ahead distances (x = 0 m, 10 m, 20 m, 30 m, 40 m). It can be seen that the error distribution is very similar to the distribution of the unprocessed measurements seen in figure 6.6. The error signals are overall a bit smoother, as can be seen in the zoomed sections of figures 6.5 and 6.6. There are two explanations for why these two signals are so similar. Firstly, the measurements were collected under very good conditions for the lane marking camera and therefore contain little noise. Secondly, the main contribution to the error is a time dependent bias that cannot be reduced by filtering.
Figure 6.5: Error to ground truth for the unprocessed lane marking path. (top left) Error at look-ahead distances x = 0 m, 10 m, 20 m, 30 m, 40 m, (top right) zoomed part of the error, (bottom plots) histograms of the distribution of the errors, with colors corresponding to the signals in the top plots.

Figure 6.6: Error to ground truth for the filtered lane marking path. (top left) Error at look-ahead distances x = 0 m, 10 m, 20 m, 30 m, 40 m, (top right) zoomed part of the error, (bottom plots) histograms of the distribution of the errors, with colors corresponding to the signals in the top plots.

The lateral control signal that is sent to the controller can be seen in figure 6.7. It shows the control signal calculated both from the filtered path and from the unprocessed measurements. The error is the difference between the lateral control signal and the control signal calculated from the ground truth. The figure also shows the probability density function of a fitted normal distribution for the error from the measurements and the error from the filtered path. In this analysis as well, the filtered signal and the unprocessed measurements are very similar.

Figure 6.7: (top) Lateral control signal generated from the ground truth, the unprocessed lane path measurements and the filtered lane marking path, (bottom left) error to the ground truth signal, (bottom right) probability density functions of the errors.

6.3.2 Filter Performance With Added Noise

In this test noise is added to all the measurements, i.e. to all the coefficients of the lane marking measurements for the left and right lanes (Δy, α, c0, c1, see equation (4.2)). The added noise is white with a Gaussian distribution. Figures 6.8 and 6.9 show the error of the lateral control signal from the filter estimates and from the unprocessed measurements, with added noise of 10% and 20% of the individual coefficients respectively. Table 6.2 shows the mean values and standard deviations of the error on the unprocessed measurements and the filter estimates for the unaltered dataset and the datasets with added noise. This shows that the implemented filter is robust towards measurement noise.
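One plausible reading of "noise of 10% of the individual coefficients", used for the sketch below, is zero-mean Gaussian noise whose standard deviation is that fraction of each coefficient's magnitude; the thesis does not spell the scaling out, so this is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_coefficient_noise(z, fraction):
    # z = [dy, alpha, c0, c1] for one lane marking measurement; add white
    # Gaussian noise with sigma = fraction * |coefficient| per coefficient.
    return z + rng.normal(0.0, fraction * np.abs(z))
```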
Figure 6.8: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 10% noise added to the measurements.

Figure 6.9: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 20% noise added to the measurements.

                                        mean value [m]    std. deviation [m]
    Unaltered dataset
      unprocessed measurements          -0.0220           0.0784
      filtered measurements             -0.0232           0.0814
    Dataset with 10% added noise
      unprocessed measurements          -0.0226           0.1380
      filtered measurements             -0.0240           0.0929
    Dataset with 20% added noise
      unprocessed measurements          -0.0179           0.2341
      filtered measurements             -0.0192           0.1138

Table 6.2: Mean values and standard deviations of the errors for the datasets with added noise.

6.3.3 Filter Performance with Removed Measurements

For this test, measurements were randomly removed from the input dataset to simulate the sensor not detecting the lanes. Figures 6.10 and 6.11 show the results when 20% of all measurements are removed from the dataset. For the analysis in figure 6.10 the lateral control signal of the unprocessed measurements is set to 0 when the measurements are removed, because the sensor outputs 0 when no lanes are detected.

Figure 6.10: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 20% of the measurements removed from the path. When no measurement is available, the lateral control signal for the unprocessed measurement path is set to 0.

In the analysis in figure 6.11 the lateral control signal is instead kept constant when no lane measurements are detected. For this dataset, keeping the lateral control signal constant appears to be equally good or even slightly better than using the filtered signals. This is because in the dataset used for this test the driver kept the vehicle very well centered in the lane, which makes the lateral control signal constant during a curve. Using this strategy in a real-world implementation might prove dangerous, however, as the system has no information about the position of the vehicle in the lane once the measurements are lost.

Figure 6.11: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 20% of the measurements removed from the data set. When no measurement is available, the lateral control signal for the unprocessed measurement path is kept constant at the last value.

Figures 6.12 and 6.13 show the analysis when 85% of the measurements are removed from the data set. In figure 6.12 the lateral control signal is kept constant, while in figure 6.13 it is set to 0. A similar behavior can be seen as when 20% of the measurements were lost: the filter is still able to estimate the path quite well. A closed-loop test would be needed to see whether keeping the reference constant would work as well.
Table 6.3 shows the mean values and standard deviations of the error signals for the tests with the unaltered data set, with 20% of the measurements removed and with 85% of the measurements removed.

Figure 6.12: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 85% of the measurements removed from the data set. When no measurement is available, the lateral control signal for the unprocessed measurement path is kept constant at the last value.

Figure 6.13: Error of the lateral control signal to ground truth for the unprocessed measurement path and the filtered path, with 85% of the measurements removed from the data set. When no measurement is available, the lateral control signal for the unprocessed measurement path is set to 0.

                                                  mean value [m]    std. deviation [m]
    Unaltered dataset
      unprocessed measurement path                -0.0220           0.0784
      filtered lane marking path                  -0.0232           0.0814
    Dataset with 20% removed measurements
      unprocessed measurement path                 0.0270           0.2071
      filtered lane marking path                  -0.0229           0.0818
      meas. with lateral control kept constant    -0.0219           0.0789
    Dataset with 85% removed measurements
      unprocessed measurements                     0.1681           0.3663
      filtered measurements                       -0.0174           0.1225
      meas. with lateral control kept constant    -0.0207           0.0968

Table 6.3: Mean values and standard deviations of the errors for the data sets with removed samples: a comparison between the filtered lane marking path, the unprocessed measurement path with the control signal set to 0 when a measurement is removed, and the unprocessed measurement path with the control signal kept constant when a measurement is removed.

6.4 Preceding vehicle filter analysis

The object detection measurements from the radar and camera sensors are sampled asynchronously. Section 4.9 describes how the measurements were filtered and smoothed to generate ground truth. The ground truth is used as a reference to determine the step length of the filter and whether the measurements should be treated as asynchronous or not. Table 6.4 shows the mean square error between the filter estimates and the ground truth for different step lengths (dt). Fixed dt means that the distribution is estimated at a fixed interval, using the latest measurements sampled since the last update step, if any. With varied dt the distribution is computed at the exact time instance at which a measurement is sampled, but some measurements might be skipped if dt is larger than the sampling interval. Treating the data as asynchronous produces less error than treating it as synchronous, which is expected, but the difference is quite small. The small difference can be explained by the fact that the Ego vehicle and the preceding vehicle travel a similar path at similar velocities, and therefore the change in the measurements between time steps is small. Furthermore, when the path is generated from the preceding vehicle, the points in the path move according to dead reckoning as explained in section 4.4.1; this produces uncertainty that is much larger than the uncertainty introduced by treating the data as synchronous.

    dt [s]    Mean square error
              Fixed dt    Varied dt
    0.01      0.0027      0.0023
    0.05      0.0040      0.0026
    0.1       0.0043      0.0033
    0.2       0.0044      0.0034
    0.5       0.0046      0.0035

Table 6.4: Mean square error of the lateral position, between the ground truth and the filter estimates, for different time step sizes (dt).
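The "varied dt" variant corresponds to predicting exactly to each measurement's timestamp before updating. A minimal loop, reusing the cv_predict and linear_update sketches from section 4.5, could look as follows; the (t, y, H, R) stream format is an assumption of this illustration.

```python
def run_varied_dt(meas_stream, x0, P0):
    # meas_stream: iterable of (t, y, H, R) tuples sorted by timestamp t,
    # mixing radar and camera measurements as they arrive.
    x, P, t_prev = x0, P0, None
    for t, y, H, R in meas_stream:
        if t_prev is not None:
            x, P = cv_predict(x, P, t - t_prev)   # predict to the sample time
        x, P = linear_update(x, P, y, H, R)       # update with that sensor
        t_prev = t
    return x, P
```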
6.5 Preceding vehicle path

A reference path was constructed from the filtered preceding vehicle position according to section 4.4.1. The coordinate transformation of the points that represent the path in the Ego vehicle's coordinate system is based on dead reckoning of the Ego vehicle. This results in increasing uncertainty as the points move away from the preceding vehicle. The path from the preceding vehicle can be seen in figure 6.14. It is not as smooth as the lane path, since the preceding vehicle does not always drive in the middle of the road. It might be of interest to smooth the path before calculating the control signals, but this was not investigated further in this thesis.

Figure 6.14: The paths that are generated for the lateral control. The green path is the lane marking reference path, the blue line is the preceding vehicle path, the blue circle is the current estimated position of the preceding vehicle, the red cross is the look-ahead point and the gray lanes are the right and left lane markings.

6.6 Outlier rejection

When the preceding vehicle leaves the sensors' field of view, the sensors output a constant value. Figure 6.15 shows the measurements from the radar and the camera. As soon as the radar loses the object it outputs the measurement 102 with a sigma value of 0. The filter would trust this measurement completely, since the stated uncertainty is zero, and set the estimate of the lateral distance close to 102 meters. This behavior is handled with an outlier rejection that discards any measurement over 100.

Figure 6.15: The radar and camera measurements of the lateral position of the preceding vehicle. The standard deviation of the measurements, as estimated by the sensors, is shown in the lower plot.
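The rejection rule itself is a one-line gate executed before the update step; the sketch below illustrates it, with the threshold of 100 m taken from the text and the function name invented here.

```python
def radar_measurement_valid(y, sigma, max_range=100.0):
    # Reject the sensor's "object lost" pattern: a constant 102 m reported
    # with sigma = 0 must never reach the Kalman update.
    return y <= max_range and sigma > 0.0
```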
7 Conclusion and Discussion

Using the filters implemented in this thesis, two reference paths were generated for lateral control. The filters use sensor information from camera and radar to achieve a robust lateral reference signal. This method is superior to using a raw measurement from a single source as a reference, since it can handle noisy measurements and can rely on prediction when measurements are lost. In order to have a reference path, lane markings or a preceding vehicle need to be detected by the sensors. The scope of this thesis was to use sensors that are currently available in production vehicles at Volvo Group Trucks Technology. State-of-the-art sensors that are more reliable and accurate were discarded since they are not yet feasible for commercially available vehicles. The solution proposed in this thesis is intended as a base platform for how sensor data is handled in several projects currently in progress at the Atr department.

7.1 Implemented filters

Filters were implemented for the estimation of the lane markings and of the position of the preceding vehicle. Two filters were implemented and compared for the lane markings. The filter based on range and error (the sampled filter) proved to perform much better than the coefficient filter, as seen in section 6.2, so it was developed further. In the sampled lane marking filter, the uncertainties of the camera measurements were quantified using the methods described in section 4.8. The resulting uncertainties describe the distribution of the measurements and are important for setting the measurement covariance in the filter accurately. The filter is able to handle measurements affected by large noise, as shown in section 6.3.2. The main advantage of the lane marking filter is that if the measurements are lost for a short period of time, the filter can still provide a reference through prediction of the path for up to 40 meters. This can be beneficial in scenarios where the lane markings are partly obscured, e.g. by snow or dirt. The second filter uses the object detections from the camera and radar sensors and estimates the position and velocity of the preceding vehicle relative to the Ego vehicle. Although the outputs from the sensors are already processed data, we treat them as raw data since we do not have access to the raw measurements.

7.2 Reference paths and error signals

Two paths were generated for lateral reference: one from the lane marking estimates and another from the estimated position of the preceding vehicle. Each path has a specific uncertainty along its length. The uncertainty of the lane path is usually much smaller than that of the preceding vehicle path, mainly because there are measurements along the entire lane path but only at the end of the preceding vehicle path. The control signal is calcu