7a.035.UVA - Improved Decision Making for Autonomous Systems

Project - Summary

The success of autonomous navigation in other modes of travel has led to its adoption in the maritime domain through the development of Autonomous Surface Vessels (ASVs). Like other autonomous vehicles, however, ASVs raise safety and reliability concerns. They are expected to navigate safely through crowded environments, such as harbors, that involve complex social interactions. Safe navigation in such environments requires the ability to actively and accurately anticipate the future intent of neighboring vessels and to adjust the vessel's own trajectory accordingly, avoiding collisions while respecting social norms. Inspired by the success of deep learning architectures, particularly LSTMs, in time-series forecasting problems, and motivated by their inability to model social interactions on their own, this work proposes a novel deep learning model that accurately predicts a vessel's intent for the next few timesteps from its history of positional data while accounting for neighboring vessels and the social interactions involved.
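
To make the idea concrete, the following is a minimal sketch of such an architecture, not the project's actual implementation: an LSTM encodes the target vessel's positional history, a second LSTM encodes each neighbor's history, soft attention weights the neighbors, and a decoder LSTM rolls out the next few (x, y) positions. It assumes PyTorch; all module, tensor, and parameter names are illustrative.

<code python>
import torch
import torch.nn as nn

class NeighborAttentionLSTM(nn.Module):
    """Illustrative sketch: LSTM intent prediction with soft attention over neighboring vessels."""
    def __init__(self, hidden_dim=64, pred_len=5):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.neighbor_encoder = nn.LSTM(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)          # soft attention score per neighbor
        self.decoder = nn.LSTM(input_size=2, hidden_size=2 * hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, 2)
        self.pred_len = pred_len

    def forward(self, target_hist, neighbor_hist):
        # target_hist:   (batch, obs_len, 2) past (x, y) of the target vessel
        # neighbor_hist: (batch, num_neighbors, obs_len, 2) past positions of neighbors
        b, n, t, _ = neighbor_hist.shape
        _, (h_t, _) = self.encoder(target_hist)                   # (1, batch, hidden)
        h_t = h_t.squeeze(0)                                      # (batch, hidden)
        _, (h_n, _) = self.neighbor_encoder(neighbor_hist.reshape(b * n, t, 2))
        h_n = h_n.squeeze(0).reshape(b, n, -1)                    # (batch, n, hidden)

        # Soft attention: score each neighbor's encoding against the target's state.
        scores = self.attn(torch.cat([h_t.unsqueeze(1).expand(-1, n, -1), h_n], dim=-1))
        weights = torch.softmax(scores, dim=1)                    # (batch, n, 1)
        social = (weights * h_n).sum(dim=1)                       # weighted social context

        # Decode pred_len future positions autoregressively from the last observed point.
        state = torch.cat([h_t, social], dim=-1).unsqueeze(0)     # (1, batch, 2*hidden)
        cell = torch.zeros_like(state)
        pos = target_hist[:, -1:, :]                              # (batch, 1, 2)
        preds = []
        for _ in range(self.pred_len):
            out, (state, cell) = self.decoder(pos, (state, cell))
            pos = self.out(out)                                   # next predicted (x, y)
            preds.append(pos)
        return torch.cat(preds, dim=1), weights                   # trajectories + attention
</code>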

Project - Team

Team Member    | Role    | Email              | Phone Number   | Academic Site/IAB
Cody Fleming   | PI      | cf5eg@virginia.edu | (434) 924-7460 | University of Virginia
Stephen Adams  | Co-PI   | sca2c@virginia.edu | (757) 870-4954 | University of Virginia
Jasmine Sekhon | Student | Not available      | Not available  | University of Virginia
Funded by: Leidos

Project - Deliverables

Deliverables
1 Literature review
2 Preliminary design of total solution
3 Software and associated artifacts
4 Final report

Project - Benefits to IAB

Because every autonomous agent must navigate complex environments involving social interactions, our spatially and temporally attentive intent-modeling architecture can also be applied to other domains, such as pedestrians, automobiles, and unmanned aerial vehicles. Beyond predicting the intent of autonomous agents, the model can explain why it made certain decisions, using hardwired and soft attention to quantify the influence of individual neighbors and of past timesteps. Given that the use of deep learning in safety-critical applications is largely held back by the inability of deep learning models to explain themselves, such explainability can help build user trust and thereby allow safety-critical applications to reap the performance benefits of deep learning.
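
As a rough illustration of that explainability, the snippet below reuses the hypothetical NeighborAttentionLSTM sketch from the Project Summary section and reads out its soft-attention weights to see which neighbor most influenced a prediction. Shapes and names are assumptions, not the project's actual interface.

<code python>
import torch

model = NeighborAttentionLSTM()
target_hist = torch.randn(1, 8, 2)        # 8 past (x, y) positions of the target vessel
neighbor_hist = torch.randn(1, 3, 8, 2)   # histories of 3 neighboring vessels

pred_traj, attn_weights = model(target_hist, neighbor_hist)
# attn_weights has shape (1, 3, 1): one weight per neighbor, summing to 1,
# which can be reported alongside the predicted trajectory as an explanation.
most_influential = attn_weights.squeeze().argmax().item()
print(f"Neighbor {most_influential} had the largest influence on the predicted path.")
</code>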

Project - Documents

Filename                                                  | Filesize  | Last modified
7a.035.uva_ip_info_sheet.docx                             | 20.0 KiB  | 2019/08/19 12:47
7a.035.uva_final_report.pdf                               | 611.5 KiB | 2019/08/19 12:43
7a.035.uva_year_7_mid-year_report_revised_01.04.2018.docx | 237.6 KiB | 2019/08/13 15:08
7a.035.uva_executive_summary.docx                         | 50.8 KiB  | 2019/08/13 15:08
7a.035.uva_confluence_page_page.pdf                       | 169.1 KiB | 2019/08/13 15:08
7a.035.uva_year_7_cvdi_mid-year_report_original.docx      | 237.4 KiB | 2019/08/13 15:08
7a.035.uva_2018_fall_meeting_poster.pptx                  | 155.6 KiB | 2019/08/13 15:08