{"id":728,"date":"2021-09-27T23:58:11","date_gmt":"2021-09-27T23:58:11","guid":{"rendered":"http:\/\/avl.ucsd.edu\/?page_id=728"},"modified":"2022-09-28T00:05:30","modified_gmt":"2022-09-28T00:05:30","slug":"engage-with-avl","status":"publish","type":"page","link":"https:\/\/avl.ucsd.edu\/?page_id=728","title":{"rendered":"Engage with AVL"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-page\" data-elementor-id=\"728\" class=\"elementor elementor-728\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-619154c elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"619154c\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-f381cff\" data-id=\"f381cff\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-dd1f529 elementor-widget elementor-widget-heading\" data-id=\"dd1f529\" data-element_type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Open Positions<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-5f9b225 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"5f9b225\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-496bd20\" data-id=\"496bd20\" 
data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap\">\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-810f6c5 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"810f6c5\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-e34852f\" data-id=\"e34852f\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-a258d18 elementor-widget elementor-widget-accordion\" data-id=\"a258d18\" data-element_type=\"widget\" data-widget_type=\"accordion.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t<div class=\"elementor-accordion\">\n\t\t\t\t\t\t\t<div class=\"elementor-accordion-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-1701\" class=\"elementor-tab-title\" data-tab=\"1\" role=\"button\" aria-controls=\"elementor-tab-content-1701\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon elementor-accordion-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-closed\"><i class=\"fas fa-plus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-opened\"><i class=\"fas fa-minus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-accordion-title\" tabindex=\"0\">Robust Detection<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-1701\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"1\" role=\"region\" aria-labelledby=\"elementor-tab-title-1701\"><p><b>Background<\/b><\/p><p><span 
style=\"font-weight: 400;\">Reliable detection and state estimation of road users (cars, bikes, pedestrians, \u2026) is critical to navigation in urban driving environments. In this project, our team will extend the autonomy pipeline for detection and explore multi-sensor methods for 3D object detection using sensing modalities such as vision and range sensors. <\/span><\/p><p><b>Project Objectives\u00a0<\/b><\/p><p><span style=\"font-weight: 400; color: #000000;\">The objectives of this project are as follows:<\/span><\/p><ol><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate various state-of-the-art strategies for 3D object detection using modalities such as radar and camera<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the real-time capabilities of the proposed method<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Incorporate the model into a full autonomy stack<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the method in real time on a full-scale system<\/span><\/li><\/ol><p><b>Preferred Skills<\/b><\/p><p><span style=\"font-weight: 400;\">ROS, Python, C++, Deep Learning\/ML (3D object detection), 3D computer vision, experience with LIDAR<\/span><\/p><p><b>Project Timeline: <span style=\"font-weight: 400;\">at least 2 quarters<\/span><\/b><\/p><p>\u00a0<\/p><\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-accordion-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-1702\" class=\"elementor-tab-title\" data-tab=\"2\" role=\"button\" aria-controls=\"elementor-tab-content-1702\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon elementor-accordion-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-closed\"><i 
class=\"fas fa-plus\"><\/i><\/span>\n\t\t\t\t<span class=\"elementor-accordion-icon-opened\"><i class=\"fas fa-minus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-accordion-title\" tabindex=\"0\">Intent Recognition \/ Prediction<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-1702\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"2\" role=\"region\" aria-labelledby=\"elementor-tab-title-1702\"><p><b>Background<\/b><\/p><p><span style=\"font-weight: 400;\">Understanding the future states (i.e., 3-10 seconds ahead) of road participants plays an important role in decision-making and navigation. In this project, several long-term forecasting and intent recognition strategies will be considered and incorporated into our autonomy stack.<\/span><\/p><p><b>Project Objectives\u00a0<\/b><\/p><p><span style=\"font-weight: 400; color: #000000;\">The objectives of this project are as follows:<\/span><\/p><ol><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate state-of-the-art strategies for prediction<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the real-time capabilities of the proposed method<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Incorporate the model into a full autonomy stack<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the method in real time on a full-scale system<\/span><\/li><\/ol><p><b>Preferred Skills<\/b><\/p><p><span style=\"font-weight: 400;\">ROS, Python, C++, ML (prediction), probabilistic state estimation and tracking<\/span><\/p><p><b>Project Timeline: <span style=\"font-weight: 400;\">at least 2 quarters<\/span><\/b><\/p><\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div 
class=\"elementor-accordion-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-1703\" class=\"elementor-tab-title\" data-tab=\"3\" role=\"button\" aria-controls=\"elementor-tab-content-1703\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon elementor-accordion-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-closed\"><i class=\"fas fa-plus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-opened\"><i class=\"fas fa-minus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-accordion-title\" tabindex=\"0\">Behavioral and Motion Planning<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-1703\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"3\" role=\"region\" aria-labelledby=\"elementor-tab-title-1703\"><p><b>Background<\/b><\/p><p><span style=\"font-weight: 400;\">Decision making entails defining a sequence of actions given spatiotemporal information about surrounding agents and obstacles provided by a perception stack. In this project, appropriate behaviors and decision-making strategies will be incorporated into our autonomy stack. 
Early testing and validation will be performed in simulation; deployment will follow on full-scale vehicles in an urban setting.<\/span><\/p><p><b>Project Objectives\u00a0<\/b><\/p><p><span style=\"font-weight: 400; color: #000000;\">The objectives of this project are as follows:<\/span><\/p><ol><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Implement a method for dynamic obstacle avoidance using learning-based strategies<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Benchmark against existing obstacle avoidance methods<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the real-time capabilities of the proposed method<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Incorporate the model into a full autonomy stack<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the methods in real time on a full-scale system<\/span><\/li><\/ol><p><b>Preferred Skills<\/b><\/p><p><span style=\"font-weight: 400;\">ROS, Python, C++, ML (imitation learning), graphical models (factor graphs).<\/span><\/p><p><b>Project Timeline: <span style=\"font-weight: 400;\">at least 2 quarters<\/span><\/b><\/p><\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"elementor-accordion-item\">\n\t\t\t\t\t<div id=\"elementor-tab-title-1704\" class=\"elementor-tab-title\" data-tab=\"4\" role=\"button\" aria-controls=\"elementor-tab-content-1704\" aria-expanded=\"false\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon elementor-accordion-icon-left\" aria-hidden=\"true\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<span class=\"elementor-accordion-icon-closed\"><i class=\"fas fa-plus\"><\/i><\/span>\n\t\t\t\t<span class=\"elementor-accordion-icon-opened\"><i class=\"fas 
fa-minus\"><\/i><\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\t\t\t\t<a class=\"elementor-accordion-title\" tabindex=\"0\">Monocular Depth Estimation<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div id=\"elementor-tab-content-1704\" class=\"elementor-tab-content elementor-clearfix\" data-tab=\"4\" role=\"region\" aria-labelledby=\"elementor-tab-title-1704\"><p><b>Background<\/b><\/p><p><span style=\"font-weight: 400;\">Scene understanding and state estimation of various agents depend on a unified representation of the scene. To facilitate this process, depth estimation and coordinate transformations from a sensor-centric to an egocentric perspective are of particular relevance. In this project, several strategies will be considered for monocular depth estimation using image data, with other sensing modalities used to supervise the depth estimation process.<\/span><\/p><p><b>Project Objectives\u00a0<\/b><\/p><p><span style=\"font-weight: 400; color: #000000;\">The objectives of this project are as follows:<\/span><\/p><ol><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate various state-of-the-art strategies for monocular depth estimation and improve on existing techniques<\/span><\/li><li style=\"font-weight: 400;\" aria-level=\"3\"><span style=\"font-weight: 400; color: #000000;\">Evaluate the real-time capabilities and practicality of the proposed method<\/span><\/li><\/ol><p><b>Preferred Skills<\/b><\/p><p><span style=\"font-weight: 400;\">Python, C++, Deep Learning\/ML (computer vision), structure from motion, multi-view geometry<\/span><\/p><p><b>Project Timeline: <span style=\"font-weight: 400;\">1-2 quarters<\/span><\/b><\/p><\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-c32ce8a elementor-section-boxed 
elementor-section-height-default elementor-section-height-default\" data-id=\"c32ce8a\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-4ba8b4c\" data-id=\"4ba8b4c\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-d66da2b elementor-widget elementor-widget-html\" data-id=\"d66da2b\" data-element_type=\"widget\" data-widget_type=\"html.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<div id=\"erf_form_container_750\" class=\"erf-container erf-contact erf-label-top erf-layout-one-column erf-style-rounded-corner\">\n        \n    \n        Form submission limit reached.<br> \n            \n<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Open Positions Robust Detection Background Reliable detection and state estimation of road users (cars, bikes, pedestrians, \u2026) is critical to navigation in urban driving environments. In this project, our team will extend the autonomy pipeline for detection and explore multi-sensor methods for 3D object detection using sensing modalities such as vision and range sensors. 
Project&#8230;<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-728","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/pages\/728","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=728"}],"version-history":[{"count":35,"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/pages\/728\/revisions"}],"predecessor-version":[{"id":1026,"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=\/wp\/v2\/pages\/728\/revisions\/1026"}],"wp:attachment":[{"href":"https:\/\/avl.ucsd.edu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=728"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}