Background
Autonomous navigation relies on making sense of the environment. Such an environment can be captured by a semantic map: a map that encodes not only geometric information but also contextual information. In this project, we will build on prior work [1] and explore symbolic representations that leverage compact geometric primitives and priors for dynamic scene modeling. This transformation of the semantic map will enable robust navigation at a larger scale.
Project Objectives
The objectives of this project are as follows:
Using a camera-LiDAR sensor suite, identify robust strategies for online feature extraction, including lane markings, crosswalks, traffic cones, etc.
Given extracted features, perform the necessary transformations and fuse 2D/3D information.
Explore symbolic representations that encode lane boundaries, sidewalks, and similar structures to facilitate urban navigation applications.
Students will have the option to engage in subsequent navigation projects, submit work for review at major robotics conferences/journals, and verify functionality on a full-scale vehicle.
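As a sketch of the 2D/3D fusion step in the objectives above, one common approach is to project LiDAR points into the camera image so that per-pixel semantic labels (e.g., from a segmentation network) can be attached to 3D points. This is a minimal illustration, not the project's prescribed method: the extrinsic transform `T_cam_lidar`, the intrinsic matrix `K`, and the image size are placeholder values that would come from calibration in practice.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, img_w, img_h):
    """Project Nx3 LiDAR points into pixel coordinates.

    Returns (uv, mask): Nx2 pixel coordinates and a boolean mask marking
    points that lie in front of the camera and inside the image bounds.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # LiDAR -> camera frame
    in_front = pts_cam[:, 2] > 0.1                       # keep points ahead of camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv, in_front & in_img
```

Given the mask, labels can be read from the segmentation image at the valid `uv` locations and assigned back to the corresponding 3D points.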
Preferred Skills
ROS, Python, C++, probability, Deep Learning/ML models (semantic segmentation, object detection, lane detection), 3D computer vision, experience with LiDAR and GNSS technology.
Useful References
Paz, D., Zhang, H., Li, Q., Xiang, H., and Christensen, H. Probabilistic Semantic Mapping for Urban Autonomous Driving Applications. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, US, October 2020. http://ras.papercept.net/images/temp/IROS/files/2186.pdf
Background
Supervised learning models for perception tasks often rely on large-scale manually labeled datasets. Such labeling efforts are often infeasible due to the cost of large-scale annotation. Additionally, if a new sensor is used or the sensor configuration is altered, new labeled data may be needed. In this project, methods for automatic and semi-automatic labeling will be explored to address these challenges. The methods developed will be applied to point cloud and image labeling, providing labels for learning-based development.
Project Objectives
The objectives of this project are as follows:
Design and implement auto/semi-auto labeling algorithms for pedestrians and vehicles from camera and LiDAR data.
Have fun playing with multi-sensor data in the real world.
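One common pattern for the semi-automatic labeling objective above is to let a pretrained detector propose labels and triage them by confidence: high-confidence detections are accepted as pseudo-labels, ambiguous ones are queued for human review, and the rest are discarded. The sketch below is illustrative only; the `Detection` type and the thresholds are assumptions, and any off-the-shelf detector could supply the inputs.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    cls: str        # e.g. "pedestrian", "vehicle"
    box: tuple      # (x1, y1, x2, y2) in pixels
    score: float    # detector confidence in [0, 1]

def triage(detections, auto_thresh=0.9, review_thresh=0.5):
    """Split detections into auto-accepted labels and a human-review queue."""
    auto, review = [], []
    for d in detections:
        if d.score >= auto_thresh:
            auto.append(d)       # trusted pseudo-label, no annotator needed
        elif d.score >= review_thresh:
            review.append(d)     # ambiguous: send to a human annotator
        # below review_thresh: discard as a likely false positive
    return auto, review
```

Only the review queue requires annotator time, which is what makes the pipeline "semi-automatic"; the thresholds would be tuned against a small held-out labeled set.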
Preferred Skills
ROS, Python, C++, PCL, Deep Learning/ML models (object detection), tracking
© 2020 Autonomous Vehicle Laboratory