Our team has been active in a wide range of research projects, including autonomous vehicle benchmarking, dynamic scene modeling, road user trajectory forecasting, simulation environments for autonomous driving, and automatic sensor calibration. At the moment, we have open positions for undergraduate and graduate students in the areas of simulation, real-time dynamic planning, hardware synchronization of sensors, edge computing, and robust sensor fusion using radars, lidars, and vision. To learn more or to get involved in our research, feel free to contact us.
Reliable detection and state estimation of road users (cars, bikes, pedestrians, …) is critical for navigation in urban driving environments. In this project, our team will extend the autonomy pipeline for detection and explore multi-sensor methods for 3D object detection using multiple sensing modalities such as vision and range sensors. State-of-the-art methods trained on large-scale datasets often overfit to specific sensor types and configurations. Since our sensors differ from those found in public autonomous driving datasets (for example, we also use solid-state LiDARs), such models typically generalize poorly to our platform. Therefore, this project also aims to devise and implement strategies to bridge these domain gaps with little to no labeled data from our setup.
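As a concrete illustration, a common first step in camera-LiDAR fusion is projecting LiDAR points into the image plane so that 3D points can be associated with image features or 2D detections. The sketch below assumes a simple pinhole model; the matrix names and shapes are illustrative, not our actual calibration format.

```python
# Minimal sketch of projecting LiDAR points into a camera image for fusion.
# T_cam_from_lidar and K are assumed inputs from an existing calibration.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """points_lidar: (N, 3) XYZ in the LiDAR frame.
    T_cam_from_lidar: (4, 4) extrinsic transform, LiDAR -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns pixel coordinates (M, 2) and depths (M,) of points in front of the camera."""
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera (positive depth).
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]
```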
Tracking of surrounding objects is an integral part of the perception module of an autonomous driving stack. It fuses potentially noisy information across time into a consistent representation for downstream tasks such as trajectory prediction and planning. A tracker must handle high-recall detections with noisy confidence scores from learning-based detectors, as well as detections from multiple sensors, to form coherent temporal trajectories. Additionally, urban environments and dense traffic pose challenges due to occlusion. Various road users, including pedestrians (e.g., looking at their phones), cyclists, scooter riders, and skateboarders, exhibit distinct motion patterns that call for better dynamic models.
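To make the tracking-by-detection setting concrete, the following minimal sketch pairs a constant-velocity Kalman filter with greedy nearest-neighbor association in a bird's-eye-view frame. The state layout, gating threshold, and noise values are illustrative assumptions; a deployed tracker would also manage track birth and death and per-class dynamic models.

```python
# Sketch of tracking-by-detection: constant-velocity Kalman filter + greedy
# nearest-neighbor association. Noise values and gate are assumptions.
import numpy as np

class Track:
    def __init__(self, xy, track_id):
        # State: [x, y, vx, vy] in a bird's-eye-view frame.
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.id = track_id

    def predict(self, dt=0.1):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt            # constant-velocity motion model
        Q = np.eye(4) * 0.1               # process noise (assumed)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z):
        H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
        R = np.eye(2) * 0.5               # measurement noise (assumed)
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def associate(tracks, detections, gate=2.0):
    """Greedily match detections (N, 2) to tracks by Euclidean distance."""
    pairs, used = [], set()
    for ti, t in enumerate(tracks):
        if len(detections) == 0:
            break
        d = np.linalg.norm(detections - t.x[:2], axis=1)
        j = int(np.argmin(d))
        if d[j] < gate and j not in used:
            pairs.append((ti, j))
            used.add(j)
    return pairs
```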
Automated vehicles navigating highly dynamic and dense urban environments must produce real-time plans given upstream information that abstracts the surrounding environment. Classic algorithms often fail to adapt to a variable number of agents, especially pedestrians, in dynamic scenes, while state-of-the-art end-to-end learning algorithms are difficult to deploy given limited on-board computational resources. Additionally, robust incorporation of motion prediction information into the planning module remains an open problem. This project targets these limitations and aims to develop a real-time planning system that can robustly operate in dynamic campus environments.
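As a rough illustration of the kind of lightweight planning step involved, the sketch below scores sampled candidate ego trajectories against predicted trajectories of an arbitrary number of agents and picks the lowest-cost one. The cost terms, weights, and safety radius are illustrative assumptions, not the project's final design.

```python
# Sketch of one sampling-based planning step with a variable number of agents.
import numpy as np

def plan_step(candidates, agent_predictions, goal, safety_radius=1.5):
    """candidates: (K, T, 2) candidate ego trajectories (x, y over T steps).
    agent_predictions: list of (T, 2) predicted agent trajectories (any number).
    goal: (2,) target position. Returns the index of the best candidate."""
    costs = np.zeros(len(candidates))
    for k, traj in enumerate(candidates):
        # Progress cost: distance from the trajectory endpoint to the goal.
        costs[k] += np.linalg.norm(traj[-1] - goal)
        # Safety cost: penalize coming close to any predicted agent position.
        for agent in agent_predictions:
            d = np.linalg.norm(traj - agent, axis=1)   # per-timestep distance
            costs[k] += 10.0 * np.sum(np.maximum(0.0, safety_radius - d))
    return int(np.argmin(costs))
```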
In the past decade we have seen a tremendous growth of interest in autonomous vehicles. Many companies, such as Waymo, Cruise, and Tesla, have been conducting research and testing on public streets. These companies tend to focus on traditional passenger vehicles and have achieved good results in structured environments such as highway driving. However, autonomous driving in dense urban areas remains an unsolved challenge. The goal of this project is to focus on dense urban settings (such as a campus), and rather than using a car we have elected to automate an E-Scooter. By simply walking around campus and other parts of San Diego you can see these electric scooters scattered everywhere. However, they are often used only a couple of times per day before being left on the ground, at which point a company representative must pick them up and return them to their original locations. Imagine if instead the scooter could be summoned to your location via an app, drive itself back after dropping you off, and even drive itself to a charging station when necessary!
While High-Definition (HD) and dense point cloud maps considerably facilitate navigation and path tracking of autonomous cars, these maps often hinder generalization and scalability. To address these constraints, AVL is currently exploring alternative architectures that leverage semantic information extracted in real-time to characterize environments dynamically.
To achieve autonomy for self-driving vehicles, it is essential to understand the intent of other road users operating in proximity to the vehicle, such as passing through an intersection, queuing, or navigating crosswalks. While many methods have been proposed for detecting cars and pedestrians, and even tracking them over time, less effort has been devoted to recognizing the intent of other road users. Will the pedestrian cross the road? Will the driver stop at the intersection or turn left? These are the kinds of problems that the AVL at UC San Diego is actively exploring and addressing.
Based on aerial imagery, elevation data, and point cloud maps of the UCSD campus, AVL is currently building a 3D simulation environment for simulating and testing automated driving systems.
Motivated by recent developments in semantic scene modeling, we are exploring dynamic methods for trajectory generation to address the scalability constraints of existing HD maps. Our approach aims to tackle intersection navigation and settings in which multi-modal trajectory generation is required. Learn more about the approach and our data.
During the early mail delivery deployment missions, AVL logged various vehicle control and state signals to characterize overall system performance and robustness. This work has been submitted for publication and showcases appropriate metrics for benchmarking autonomous vehicles. We expect these tools to raise awareness of the performance of state-of-the-art autonomous vehicle technology, helping the community better understand its shortcomings and collectively design better-performing systems.
For intelligent vehicle applications, calibration is often a key component of sensor fusion, depth estimation, and scene understanding. However, in many scenarios, the estimated calibration parameters drift over time as a result of temperature changes and vibration. For this reason, we are actively developing tools and methods that leverage road furniture and geometric shapes, such as stop signs, to dynamically calibrate the cameras on board our vehicles in real time. Learn more about the approach and our data.
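As an illustration of the idea, the sketch below uses a stop sign of known size together with OpenCV's PnP solver to recover the sign's pose in the camera frame, from which extrinsics can be refined online. The sign dimensions, the corner detector, and the frame conventions are assumptions for the example, not our deployed calibration pipeline.

```python
# Sketch of pose estimation from a known roadside object (stop sign) via PnP.
import numpy as np
import cv2

# Four outer corners of a square bounding the sign face, in the sign's own
# frame (meters); 0.75 m across is roughly a standard stop sign width (assumed).
W = 0.75
object_points = np.array([
    [-W / 2, -W / 2, 0.0],
    [ W / 2, -W / 2, 0.0],
    [ W / 2,  W / 2, 0.0],
    [-W / 2,  W / 2, 0.0],
], dtype=np.float64)

def estimate_sign_pose(image_corners, K, dist_coeffs):
    """image_corners: (4, 2) pixel locations of the same corners, e.g. from a
    sign detector. K: (3, 3) intrinsics. Returns the sign's rotation matrix and
    translation in the camera frame, or None if the solve fails."""
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  image_corners.astype(np.float64),
                                  K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 rotation matrix
    return R, tvec
```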