Sensor Data Validation and Driving Safety in Autonomous Driving Systems
Student thesis: Doctoral Thesis
Author(s)
Related Research Unit(s)
Detail(s)
Awarding Institution | |
---|---|
Supervisors/Advisors | |
Award date | 18 Jan 2022 |
Link(s)
Permanent Link | https://scholars.cityu.edu.hk/en/theses/theses(2399f7f9-91e8-42d4-9d30-1282fd65d2f1).html |
---|---|
Abstract
Autonomous driving technology has drawn much attention due to its rapid development and high commercial value. The recent technological leap in autonomous driving can be primarily attributed to progress in environment perception. Good environment perception provides accurate, high-level environment information, which is essential for autonomous vehicles to make safe and precise driving decisions and strategies. Moreover, such progress in accurate environment perception would not be possible without deep learning models and advanced onboard sensors, such as optical sensors (LiDARs and cameras), radars, and GPS receivers. However, these advanced sensors and deep learning models are vulnerable to recently developed attacks. For example, LiDARs and cameras can be compromised by optical attacks, and deep learning models can be fooled by adversarial examples. Attacks on advanced sensors and deep learning models can severely degrade the accuracy of environment perception, posing great threats to the safety and security of autonomous vehicles. In this thesis, we study methods to detect attacks on onboard sensors and the linkage between attacked deep learning models and the driving safety of autonomous vehicles.
To detect the attacks, redundant data sources can be exploited, since the information distortions that attacks introduce into victim sensor data are inconsistent with the information from other, redundant sources. To study the linkage between attacked deep learning models and driving safety, the key is to evaluate the impact of attacks on driving safety in an end-to-end fashion. Thus, we can leverage data from different onboard sensors to detect attacks on single autonomous vehicle platforms, and we can use sensor data from multiple neighboring vehicles to detect attacks on multiple connected autonomous vehicles. Furthermore, we can implement an end-to-end driving safety evaluation framework to assess the impact of attacks on driving safety.
In this thesis, we first develop a data validation framework to detect and identify optical attacks against LiDARs and cameras for single autonomous vehicles. The greatest challenge lies in finding a type of redundant information that can be observed in both LiDAR point clouds and camera images. We tackle this challenge by using depth information as the redundancy. Our main idea is to (1) use data from three sensors to obtain two versions of depth maps (i.e., disparity maps) and (2) detect attacks by analyzing the distribution of disparity errors. Based on this detection scheme, we further develop an identification model that can identify up to n-2 attacked sensors in a system with one LiDAR and n cameras. We prove the correctness of our identification scheme and conduct experiments to demonstrate its accuracy.
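To make the detection idea concrete, the following is a minimal sketch, not the thesis implementation, of how two independently derived disparity maps could be compared: one from a stereo camera pair and one converted from LiDAR depth projected into the camera frame. The function names, histogram range, and threshold below are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the thesis implementation):
# detect an optical attack by comparing two independently derived disparity maps
# and testing whether the disparity-error distribution drifts from its clean baseline.
import numpy as np


def depth_to_disparity(depth_m: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a metric depth map into disparity (pixels) for a given stereo rig."""
    return focal_px * baseline_m / np.clip(depth_m, 1e-3, None)


def disparity_error_histogram(disp_a: np.ndarray, disp_b: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized histogram of per-pixel disparity differences over valid pixels."""
    valid = np.isfinite(disp_a) & np.isfinite(disp_b)
    err = (disp_a - disp_b)[valid]
    hist, _ = np.histogram(err, bins=bins, range=(-8.0, 8.0))
    return hist / max(hist.sum(), 1)


def is_attacked(disp_stereo: np.ndarray, disp_from_lidar: np.ndarray,
                baseline_hist: np.ndarray, threshold: float = 0.15) -> bool:
    """Flag an attack when the error distribution deviates from the attack-free baseline."""
    hist = disparity_error_histogram(disp_stereo, disp_from_lidar)
    drift = 0.5 * np.abs(hist - baseline_hist).sum()  # total-variation distance in [0, 1]
    return drift > threshold
```

Comparing error distributions rather than raw per-pixel differences keeps the check tolerant of ordinary sensor noise while still exposing the larger, structured distortions that optical attacks introduce.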
Second, since countermeasures designed for single vehicles take no advantage of multiple connected vehicles, simply deploying them in a collaborative autonomous driving system yields no additional security benefit. To this end, we propose a new data validation method that leverages data from multiple neighboring vehicular nodes to detect optical attacks against LiDARs. The first challenge in designing the method is that no mobile network can bear the burden of transmitting all point clouds among connected autonomous vehicles, which limits the amount of data available for validation; the second challenge is that the scans of objects in point clouds are usually severely incomplete on the unlit side, which hinders accurate validation. To overcome the first challenge, we leverage a region proposal network to produce proposals as validation regions and transmit only the scans within them, which greatly reduces the amount of data transmitted without overlooking potential attacks. We tackle the second challenge by concatenating the original scan of an object with a symmetrical (mirrored) copy of it to fill in the incomplete part. We perform preliminary experiments to examine our method, and the results show that our data validation method for multiple connected vehicles detects the attacks effectively with fair accuracy.
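As a rough illustration of the scan-completion step, the sketch below mirrors an object's partial point cloud about its vertical symmetry plane, taken along the estimated bounding-box heading, and concatenates the two halves. The function signature and frame conventions are assumptions for illustration, not the thesis code.

```python
# Rough illustration (assumed signature and frame conventions, not the thesis code):
# complete a partial object scan by concatenating it with a copy mirrored about
# the object's vertical symmetry plane before it is exchanged for validation.
import numpy as np


def mirror_complete(points: np.ndarray, box_center: np.ndarray, heading_rad: float) -> np.ndarray:
    """points: (N, 3) object points in the ego frame.
    box_center: (3,) center of the object's estimated 3D bounding box.
    heading_rad: yaw of the box; the symmetry plane contains the heading axis."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])              # box-to-ego rotation
    local = (points - box_center) @ rot            # express points in the box frame
    mirrored = local * np.array([1.0, -1.0, 1.0])  # flip across the longitudinal plane
    completed = np.vstack([local, mirrored])
    return completed @ rot.T + box_center          # back to the ego frame
```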
Third, previous studies have demonstrated that adversarial examples can severely degrade deep learning models for environment perception, and inaccurate perception results may undoubtedly jeopardize the driving safety of autonomous vehicles. However, driving safety is the combined result of many factors, and weakened model performance does not necessarily translate into safety hazards. The linkage between the performance of a deep learning model under adversarial attacks and driving safety remains under-explored. To study this linkage and evaluate the impact of adversarial examples on driving safety in an end-to-end fashion, we propose an end-to-end driving safety evaluation framework with a set of driving safety performance metrics. With this framework, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on driving safety rather than only on the perception precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3D object detection, Stereo R-CNN and DSGN. By analyzing the results of our extensive evaluation experiments, we find that an attack's impact on the driving safety of autonomous vehicles and its impact on the precision of 3D object detectors are decoupled. We further investigate the causes behind this finding with an ablation study. Our finding provides a new perspective for evaluating adversarial attacks and guides the selection of deep learning models in autonomous driving.
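For concreteness, a minimal sketch of such an end-to-end evaluation loop follows. The detector, planner, simulator interface, and the specific safety metrics (collision, minimum time-to-collision, minimum gap) are assumptions standing in for the framework's actual components; the point is that safety is measured on the closed-loop driving outcome rather than on detection precision alone.

```python
# Minimal sketch of an end-to-end safety evaluation loop (illustrative only; the
# detector, planner, simulator interface, and metric names are assumptions).
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class SafetyMetrics:
    collision: bool   # whether the ego vehicle collided during the scenario
    min_ttc_s: float  # minimum time-to-collision observed (seconds)
    min_gap_m: float  # minimum longitudinal gap to the lead object (meters)


def evaluate_scenario(frames: Iterable, detector: Callable, planner: Callable,
                      simulator) -> SafetyMetrics:
    """Drive one scenario end to end: perception -> planning -> control -> metrics."""
    min_ttc, min_gap, collided = float("inf"), float("inf"), False
    for frame in frames:
        detections = detector(frame)     # possibly under adversarial attack
        command = planner(detections)    # driving decision from perception output
        state = simulator.step(command)  # advance the closed-loop simulation
        min_ttc = min(min_ttc, state.time_to_collision)
        min_gap = min(min_gap, state.gap_to_lead)
        collided = collided or state.collision
    return SafetyMetrics(collided, min_ttc, min_gap)
```

Running the same scenarios once with a clean detector and once with an attacked one, and comparing the resulting metrics, shows whether a drop in perception precision actually degrades driving safety.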
To briefly summarize, in this thesis we first propose a framework to detect optical attacks and identify the attacked sensors for single autonomous vehicles. We then propose a data validation method that detects optical attacks against LiDARs using point clouds from multiple connected vehicles. Finally, we propose an end-to-end driving safety evaluation framework to investigate the impact of adversarial attacks on the driving safety of autonomous vehicles. The research presented in this thesis advances the safety and security of autonomous driving technology and will ultimately benefit our daily lives.