Stanford Intelligent Systems Laboratory
Autonomous systems, such as self-driving cars, are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms can handle the unique challenges humans present: people tend to defy expected behaviors and do not conform to many of the standard assumptions made in robotics. To design safe, trustworthy autonomy, we must transform how intelligent systems interact with, influence, and predict the behavior of human agents. In this work, we use tools from robotics, artificial intelligence, and control to uncover structure in complex human-robot systems and create more intelligent, interactive autonomy.
In this talk, I’ll present robust prediction methods that forecast driving behavior over long time horizons with high accuracy. These methods have been applied to intervention schemes for semi-autonomous vehicles and to autonomous planning that accounts for nuanced interactions during cooperative maneuvers. I’ll also present a new framework for multi-agent perception that uses people as sensors to improve mapping: by observing the actions of human agents, we can infer the state of occluded regions and, in turn, improve control. Finally, I’ll describe recent efforts to validate stochastic systems, to merge deep learning and control, and to implement these algorithms on a fully equipped test vehicle that can operate safely on the road.
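To make the "people as sensors" idea concrete, here is a minimal sketch of how observed human actions can update a belief about an occluded region via Bayes' rule. All names and probabilities below are illustrative assumptions for a toy scenario (drivers braking in front of a hidden crosswalk), not the actual model from the talk.

```python
# Hypothetical likelihoods P(brake | occluded crosswalk occupied / empty).
# These values are assumptions chosen for illustration.
P_BRAKE_GIVEN_OCCUPIED = 0.9   # a driver who can see a pedestrian usually brakes
P_BRAKE_GIVEN_EMPTY = 0.1      # unprompted braking is rare

def update_occupancy(prior: float, braked: bool) -> float:
    """Bayesian update of P(occluded region occupied) from one observed driver."""
    like_occ = P_BRAKE_GIVEN_OCCUPIED if braked else 1 - P_BRAKE_GIVEN_OCCUPIED
    like_emp = P_BRAKE_GIVEN_EMPTY if braked else 1 - P_BRAKE_GIVEN_EMPTY
    numerator = like_occ * prior
    return numerator / (numerator + like_emp * (1 - prior))

# Fuse the actions of three successive observed drivers into one belief.
belief = 0.2  # prior belief that the hidden crosswalk is occupied
for braked in [True, True, False]:
    belief = update_occupancy(belief, braked)
print(belief)
```

Each observed human acts as a noisy binary sensor on the hidden state; repeated observations sharpen the belief, which a planner could then use to slow down before an occluded crosswalk.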