During the early mail delivery deployment missions, AVL logged an array of vehicle control and state signals to characterize overall system performance and robustness. This work has been submitted for publication and showcases appropriate metrics for benchmarking autonomous vehicles. We expect these tools to raise awareness of the performance of state-of-the-art autonomous vehicle technology, helping the community better understand the shortcomings of today's technology and collectively design better performing systems.
To achieve autonomy for self-driving vehicles, it is essential to understand the intent of other road users operating in proximity to the vehicle: whether they will pass through an intersection, queue, or navigate a crosswalk. While many methods have been proposed for detecting cars and pedestrians, and even for tracking them over time, less effort has been devoted to recognizing the intent of other road users. Will the pedestrian cross the road? Will the driver stop at the intersection or turn left? These are the kinds of problems that the AVL at UC San Diego is actively exploring and addressing.
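One common formulation of the pedestrian-crossing question (not necessarily the approach AVL uses) is to treat it as a binary classification over features extracted from a tracked trajectory. The sketch below trains a plain logistic-regression model on synthetic data; the feature set (distance to curb, lateral speed toward the road, whether the pedestrian faces the road) and the labeling rule are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(n=200):
    """Toy examples: pedestrians moving toward the road while facing it
    are labeled 'will cross' (1). Features and rule are hypothetical."""
    X = np.column_stack([
        rng.uniform(0.0, 5.0, n),    # distance to curb (m)
        rng.uniform(-1.5, 1.5, n),   # lateral speed toward road (m/s)
        rng.integers(0, 2, n),       # facing the road? (0/1)
    ])
    y = ((X[:, 1] > 0.3) & (X[:, 2] == 1)).astype(float)
    return X, y

def train_logreg(X, y, lr=0.1, steps=2000):
    """Gradient-descent logistic regression with a bias term."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_cross_prob(w, features):
    """Probability that a pedestrian with these features will cross."""
    xb = np.concatenate([[1.0], features])
    return 1.0 / (1.0 + np.exp(-xb @ w))

X, y = make_synthetic()
w = train_logreg(X, y)
# A pedestrian 1 m from the curb, moving toward the road, facing it:
p_cross = predict_cross_prob(w, np.array([1.0, 1.0, 1.0]))
# One standing still, facing away:
p_stay = predict_cross_prob(w, np.array([4.0, -1.0, 0.0]))
```

In practice, intent models operate on richer inputs (pose, gaze, scene context, trajectory history), often with sequence models rather than a linear classifier; this sketch only illustrates the problem framing.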
While High-Definition (HD) maps and dense point cloud maps considerably facilitate navigation and path tracking for autonomous cars, these maps often hinder generalization and scalability. To address these limitations, AVL is currently exploring alternative architectures that leverage semantic information extracted in real time to characterize the environment dynamically.
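To make the alternative concrete: instead of looking up a prebuilt HD map, a vehicle can accumulate per-frame semantic observations into a local, ego-centered grid. The sketch below is an illustration under assumed parameters (grid size, cell resolution, and class IDs are all made up), not a description of AVL's architecture.

```python
import numpy as np

# Hypothetical semantic class IDs for this example only.
UNKNOWN, ROAD, SIDEWALK, VEHICLE = 0, 1, 2, 3

class LocalSemanticGrid:
    """A fixed-size, ego-centered grid of per-cell class vote counts."""

    def __init__(self, size=100, resolution=0.5, num_classes=4):
        self.size = size              # cells per side
        self.resolution = resolution  # meters per cell
        self.counts = np.zeros((size, size, num_classes), dtype=np.int32)

    def _to_cell(self, x, y):
        """Map ego-frame meters to grid indices (origin at grid center)."""
        half = self.size * self.resolution / 2
        i = int((x + half) / self.resolution)
        j = int((y + half) / self.resolution)
        return i, j

    def observe(self, x, y, cls):
        """Accumulate one semantic observation at ego-frame (x, y)."""
        i, j = self._to_cell(x, y)
        if 0 <= i < self.size and 0 <= j < self.size:
            self.counts[i, j, cls] += 1

    def label(self, x, y):
        """Majority-vote class for a cell; UNKNOWN if never observed."""
        i, j = self._to_cell(x, y)
        c = self.counts[i, j]
        return UNKNOWN if c.sum() == 0 else int(c.argmax())

grid = LocalSemanticGrid()
for _ in range(3):
    grid.observe(2.0, 0.0, ROAD)   # three frames agree: road
grid.observe(2.0, 0.0, VEHICLE)    # one noisy detection
```

Because the grid is rebuilt from live perception, it generalizes to unmapped areas; the trade-off is that it only covers the sensor horizon, which is one reason the map-versus-perception balance remains an open design question.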
Based on aerial imagery, elevation data, and a point cloud map of the UC San Diego campus, AVL is currently building a simulation environment that incorporates 3D scenes for simulating and testing automated driving systems.
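One building block of such a pipeline is converting a gridded elevation raster into renderable geometry. The following is a minimal sketch, assuming a regular grid of height samples; the grid size and spacing are invented, and a real pipeline would also texture the mesh from aerial imagery and fuse in point cloud detail.

```python
import numpy as np

def heightmap_to_mesh(elevation, spacing=1.0):
    """Return (vertices, faces) for a regular-grid terrain mesh.

    elevation: 2D array of heights (meters).
    spacing:   ground distance between adjacent samples (meters).
    """
    rows, cols = elevation.shape
    # Vertex (x, y, z) positions, one per elevation sample.
    ys, xs = np.mgrid[0:rows, 0:cols].astype(float) * spacing
    vertices = np.column_stack([xs.ravel(), ys.ravel(), elevation.ravel()])

    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            v00 = r * cols + c     # top-left corner of this grid cell
            v01 = v00 + 1          # top-right
            v10 = v00 + cols       # bottom-left
            v11 = v10 + 1          # bottom-right
            faces.append((v00, v10, v01))  # two triangles per cell
            faces.append((v01, v10, v11))
    return vertices, np.array(faces)

# A tiny 3x3 heightmap with a 1 m bump in the middle:
hm = np.array([[0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])
verts, faces = heightmap_to_mesh(hm, spacing=2.0)
```

The resulting vertex and face arrays are the format most simulators and 3D engines accept for static terrain, onto which dynamic actors and sensor models can then be layered.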