Autonomous agents can already perform individual tasks such as object detection and lane following, especially in uncluttered, static environments. However, for robots to operate safely in the real world, they must understand the dynamics of the world around them and model its uncertainty. Moreover, they must gain this experience from minimal data and time while adapting to previously unseen phenomena. The focus of my research is developing data-efficient algorithms for modeling uncertainty in dynamic, unstructured environments and using that uncertainty for decision-making. To achieve this goal, working at the intersection of robotics and machine learning, I develop and apply statistical machine learning and deep learning tools to operate robots safely in ever-changing environments while providing theoretical and/or empirical guarantees. The main thrusts of my research are:

  1. Scalable uncertainty quantification in modeling/perception
  2. Decision-making under uncertainty
  3. Probabilistic domain adaptation for small-data regimes

Scalable Uncertainty Quantification in Perception

Temporal variations of spatial processes exhibit highly nonlinear patterns, and modeling them is vital in many disciplines. For instance, robots operating in dynamic environments demand rich information for safe and robust path planning. To model these spatiotemporal phenomena, I develop and utilize theory from reproducing kernel Hilbert spaces (RKHS), approximate Bayesian inference such as stochastic variational inference, scalable Gaussian processes, deep learning architectures, and directional statistics to capture nonlinear patterns and uncertainty. I quantify both model and data uncertainty in small- and big-data settings.
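As a minimal illustration of the model uncertainty these tools quantify, the sketch below implements exact Gaussian process regression with a squared-exponential kernel on a toy 1-D dataset. All data and hyperparameter values are invented for illustration; a real spatiotemporal model would use the scalable approximations mentioned above.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Exact GP posterior mean and variance (model uncertainty) at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)  # shrinks near observed data
    return mean, var

X_train = np.array([-2.0, -1.0, 0.0, 1.0])
y_train = np.sin(X_train)                 # toy latent process
X_test = np.array([0.5, 3.0])             # near vs. far from the data
mean, var = gp_predict(X_train, y_train, X_test)
```

Note how the predictive variance is small where data exists and reverts to the prior far away, which is exactly the signal a planner can exploit.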

Uncertainty in Occupancy

Uncertainty in Directions

Estimating the directions of moving objects or flow is useful for many applications. I am mainly interested in modeling the multimodal aleatoric uncertainty associated with directions. Summary:

  • Developing a multimodal directional mapping framework [IROS’18]
  • Extending it to model spatiotemporal directional changes [RA-L'19]
  • Developing the Directional Primitive framework for incorporating prior information [ITSC'20]
Representative paper: Directional Primitives for Uncertainty-Aware Motion Estimation in Urban Environments
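A common way to represent multimodal aleatoric uncertainty over directions is a mixture of von Mises distributions, the circular analogue of a Gaussian mixture. The sketch below evaluates such a mixture for a hypothetical intersection with two likely travel directions; the component parameters are invented for illustration, not taken from the papers above.

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    """Density of a von Mises distribution on the circle (np.i0 is the Bessel I0)."""
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

def mixture_pdf(theta, mus, kappas, weights):
    """Multimodal directional density: a weighted mixture of von Mises components."""
    return sum(w * von_mises_pdf(theta, m, k)
               for m, k, w in zip(mus, kappas, weights))

# Two likely travel directions: straight ahead (0 rad) and a left turn (pi/2).
mus, kappas, weights = [0.0, np.pi / 2], [8.0, 8.0], [0.6, 0.4]
thetas = np.linspace(-np.pi, np.pi, 361)
density = mixture_pdf(thetas, mus, kappas, weights)
```

Unlike a single unimodal model, this density keeps probability mass on both maneuvers instead of averaging them into a meaningless in-between direction.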

Uncertainty in Velocity

With the advancement of efficient and intelligent vehicles, future urban transportation systems will contain both autonomous and human-driven vehicles. Not only will we have driverless cars on roads, but we will also have delivery drones and urban air mobility systems with vertical take-off and landing capability. How can we model the epistemic uncertainty associated with the velocity and acceleration of vehicles in large-scale 3D transportation systems?

  • Modeling global and local environment dynamics in large-scale transportation systems
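One simple, widely used way to estimate the epistemic uncertainty of such velocity models is an ensemble: fit several models on bootstrap resamples and measure their disagreement. The toy sketch below does this with polynomial regressors on synthetic speed data; all data and modeling choices are illustrative, not the method used in the work above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: vehicle speed vs. position along a road segment, with sensor noise.
x = rng.uniform(0, 5, size=60)
v = 10.0 + 2.0 * np.sin(x) + rng.normal(0, 0.5, size=60)

# Bootstrap ensemble of cubic regressors; disagreement between members
# approximates epistemic uncertainty, which grows outside the observed region.
x_query = np.array([2.5, 9.0])   # inside vs. outside the data support
preds = []
for _ in range(20):
    idx = rng.integers(0, len(x), size=len(x))      # bootstrap resample
    coeffs = np.polyfit(x[idx], v[idx], deg=3)      # fit one ensemble member
    preds.append(np.polyval(coeffs, x_query))
epistemic_std = np.std(preds, axis=0)               # member disagreement
```

The disagreement blows up at the extrapolation point, flagging predictions a planner should not trust.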

Predicting Future in Space and Time

Humans subconsciously predict how the space around them evolves to make reliable decisions. How can robots predict what would happen in the next few seconds around them? Summary:

  • Proposing kernel methods for propagating uncertainty into the future, especially for static ego-agents [NeurIPS’16, ICRA’19]
  • Predicting the future occupancy maps using ConvLSTMs for moving vehicles in urban settings [arXiv’20]
Representative paper: Double-Prong ConvLSTM for Spatiotemporal Occupancy Prediction in Dynamic Environments
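Independently of the specific ConvLSTM architecture in the paper above, the core idea of propagating occupancy uncertainty forward in time can be sketched with a simple stochastic motion model: each cell's occupancy probability is spread according to a transition kernel, so predictions grow more diffuse further into the future. The grid and kernel below are invented for illustration.

```python
import numpy as np

def predict_occupancy(grid, motion_kernel, steps=1):
    """Propagate cell occupancy probabilities under a stochastic motion model.

    motion_kernel[1 + di, 1 + dj] is the probability that an occupant moves
    by (di, dj) in one step; repeated steps diffuse the probability mass.
    """
    k = motion_kernel / motion_kernel.sum()
    pad = k.shape[0] // 2
    for _ in range(steps):
        padded = np.pad(grid, pad)
        out = np.zeros_like(grid)
        for i in range(grid.shape[0]):
            for j in range(grid.shape[1]):
                # Convolution (flipped kernel) gathers mass arriving at (i, j).
                out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]]
                                   * k[::-1, ::-1])
        grid = out
    return grid

# A single occupied cell moving right with some directional noise.
grid = np.zeros((7, 7))
grid[3, 1] = 1.0
kernel = np.array([[0.0, 0.1, 0.0],
                   [0.0, 0.2, 0.5],
                   [0.0, 0.1, 0.1]])
future = predict_occupancy(grid, kernel, steps=2)
```

After two steps the mass has drifted rightward and spread over several cells, i.e., the forecast correctly becomes less certain with the prediction horizon.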

Safe Decision-Making under Uncertainty

Because we cannot have an ideal model of the environment, robots should account for uncertainty when making decisions to ensure safe and robust operation. They should consider multiple, if not all, hypotheses when deciding how to act. To this end, I work on propagating uncertainty from perception into decision-making while accounting for the uncertainty of states, dynamics models, etc. For decision-making under uncertainty, I make use of imitation policy learning algorithms, partially observable Markov decision processes, model predictive control, and Bayesian optimization.
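A minimal sketch of this idea, on an invented 1-D task: rather than planning against a single dynamics model, sample many dynamics hypotheses and choose the action minimizing expected cost across them.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(action, drift):
    """Hypothetical 1-D cost: squared distance to a goal after one step
    under an uncertain drift term in the dynamics."""
    goal = 1.0
    next_state = action + drift
    return (next_state - goal) ** 2

# Epistemic uncertainty in the dynamics: many plausible drift values
# instead of one point estimate.
drift_samples = rng.normal(loc=0.2, scale=0.1, size=100)

# Evaluate candidate actions by their cost averaged over all hypotheses.
actions = np.linspace(-1.0, 2.0, 31)
expected_costs = [np.mean([rollout_cost(a, d) for d in drift_samples])
                  for a in actions]
best_action = actions[int(np.argmin(expected_costs))]
```

The chosen action compensates for the average drift (about 1.0 - 0.2 = 0.8 here) rather than assuming the nominal zero-drift model, which is the essence of robust, uncertainty-aware decision-making.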

Stochasticity of Human Policies

When humans operate in environments where they must follow rules, as in driving, we cannot expect them to adhere to those rules perfectly. It is therefore important to account for this intrinsic stochasticity when making decisions. We specifically focus on developing uncertainty-aware intelligent driver models that are invaluable for planning in autonomous vehicles as well as for validating their safety in simulation. Summary:

  • Modeling human driving behavior through generative adversarial imitation learning [arXiv'20]
  • Augmenting rule-based intelligent driver behavior models with data-driven stochastic parameter estimation for driving on highways [ACC'20]
  • Merging into highways while cooperating with other drivers
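As a concrete (simplified) illustration of stochastic parameter estimation for rule-based driver models, the sketch below evaluates the standard Intelligent Driver Model (IDM) acceleration with per-driver desired speeds and time headways drawn from assumed Gaussian distributions; the numbers are illustrative, not the fitted distributions from the papers above.

```python
import numpy as np

rng = np.random.default_rng(1)

def idm_acceleration(v, gap, dv, v0, T, a_max=1.5, b=2.0, s0=2.0):
    """Intelligent Driver Model: acceleration for speed v, bumper gap,
    and closing speed dv, given desired speed v0 and time headway T."""
    s_star = s0 + v * T + v * dv / (2 * np.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Stochastic drivers: each driver gets a sampled desired speed and headway,
# producing a distribution over accelerations instead of a single value.
v0_samples = rng.normal(30.0, 2.0, size=500)   # desired speed [m/s]
T_samples = rng.normal(1.5, 0.2, size=500)     # time headway [s]
accels = idm_acceleration(20.0, gap=40.0, dv=0.0,
                          v0=v0_samples, T=T_samples)
```

The resulting spread of accelerations is what a planner or a simulation-based safety validator can consume, instead of a single deterministic IDM response.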

State and Observation Uncertainty

When robots operate in real-world environments they do not have access to all the information they require to make safe decisions. Some information might even be partially observable and highly uncertain. How can agents equipped with sequential decision-making algorithms account for partial observability? Summary:

  • Using partially observable Markov decision processes (POMDPs) for planning
  • Scaling up POMDPs for large state/observation spaces
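The core of POMDP planning is maintaining a belief over hidden states via Bayesian filtering. Below is a minimal discrete belief update for an invented two-state example (the sensor and transition probabilities are made up); scaling this to large state and observation spaces is exactly what makes the problem hard.

```python
import numpy as np

def belief_update(belief, T, O, action, obs):
    """Bayes filter for a discrete POMDP:
    b'(s') ∝ O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
    predicted = T[action].T @ belief          # predict through the dynamics
    updated = O[action][:, obs] * predicted   # correct with the observation
    return updated / updated.sum()            # renormalize

# Two hidden states (e.g. "lane clear" vs. "lane blocked"), one action,
# and a noisy sensor. T[a, s, s'] and O[a, s', o] are illustrative.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])
b = np.array([0.5, 0.5])
b = belief_update(b, T, O, action=0, obs=0)   # observe "clear"
```

A single "clear" observation shifts the belief strongly toward the clear state while still reserving probability for the blocked one, which is what allows downstream planning to stay cautious.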

Exploring Unknown Environments

When a robot enters a new environment, it needs to explore and build a map for future use. Just think about what a Roomba does when we operate it in our home for the first time. Robots can also gather task-specific information, as in search and rescue, subject to constraints. Summary:

  • Using perception uncertainty for exploring environments by avoiding hazards
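A minimal sketch of uncertainty-driven exploration with hazard avoidance: target the map cell whose occupancy probability is most uncertain (highest entropy) while excluding cells that are likely obstacles. The grid values and threshold below are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of per-cell occupancy probabilities (max at p = 0.5)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def pick_exploration_target(occupancy, hazard_threshold=0.7):
    """Choose the most uncertain cell, skipping probable obstacles (hazards)."""
    H = entropy(occupancy)
    H[occupancy > hazard_threshold] = -np.inf   # never target likely obstacles
    return np.unravel_index(np.argmax(H), occupancy.shape)

# Tiny occupancy map: values near 0 are free, near 1 are occupied,
# and values near 0.5 are unexplored/uncertain.
occupancy = np.array([[0.05, 0.5, 0.9],
                      [0.1,  0.3, 0.95]])
target = pick_exploration_target(occupancy)
```

The robot is sent toward the most informative cell (here the one at probability 0.5) rather than toward already-known free space or hazardous occupied cells.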

Probabilistic Domain Adaptation for Small-Data Regimes

Considering uncertainty in perception and decision-making helps to operate robots robustly. However, if we want to further improve the robustness, especially when operating in new domains, our algorithms should be adaptive. Therefore, the uncertainty quantification algorithms and uncertainty-aware decision-making algorithms should swiftly adapt their probability distributions to environmental changes and new scenarios. We utilize theory from stochastic variational inference, black-box variational inference, kernel methods, deep generative models, and optimal transport to develop adaptive probabilistic models.

Spatial Nonstationarity

When we develop a spatial representation, its parameters should adapt to local spatial changes. For instance, in kernelized occupancy mapping, the kernel parameters should adapt to the level of occupancy. Given that the probabilistic models are already complicated, learning nonstationarity is challenging.

  • Learning distributions over parameters and hyperparameters to model nonstationarity [CoRL'18]
  • Learning kernels to capture complex spatiotemporal patterns [AAAI'16, AISTATS'19]
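One standard construction for input-dependent lengthscales is the Gibbs kernel, which remains a valid (positive semi-definite) kernel even as the lengthscale varies across space. The sketch below uses an invented lengthscale function that shrinks near x = 0, e.g. where occupancy changes rapidly; it is an illustration of the idea, not the models in the papers above.

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale_fn):
    """Gibbs nonstationary kernel in 1-D:
    k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
               * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))."""
    l1 = lengthscale_fn(x1)[:, None]
    l2 = lengthscale_fn(x2)[None, :]
    d2 = (x1[:, None] - x2[None, :]) ** 2
    denom = l1 ** 2 + l2 ** 2
    return np.sqrt(2 * l1 * l2 / denom) * np.exp(-d2 / denom)

# Hypothetical lengthscale: short near x = 0 (fine detail), long far away.
lengthscale = lambda x: 0.3 + np.abs(x)
x = np.linspace(-3, 3, 50)
K = gibbs_kernel(x, x, lengthscale)
```

Correlations decay quickly near the origin and slowly far from it, so the same map can be detailed where it matters and smooth elsewhere.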

Domain Adaptation

Inferring parameters in probabilistic models can sometimes be computationally expensive. When a robot changes its operating domain, can we adapt the parameters to the new domain instead of learning from scratch?

  • Quickly adapting probability distributions defined over parameters when changing from one domain to another domain [RSS'20]
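A minimal illustration of the warm-start idea with a conjugate Gaussian model: the posterior learned in the source domain becomes the prior in the target domain, so only a few target-domain samples are needed. All numbers are invented, and this closed-form model is a simplified stand-in for the richer distributions adapted in the work above.

```python
import numpy as np

def gaussian_update(prior_mean, prior_var, data, noise_var):
    """Conjugate Bayesian update of a Gaussian belief over a parameter,
    given observations with known noise variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(2)

# Posterior from the source domain reused as the target-domain prior,
# so a handful of samples suffices instead of learning from scratch.
source_mean, source_var = 1.0, 0.05
target_data = rng.normal(1.4, 0.3, size=5)      # small-data regime
adapted_mean, adapted_var = gaussian_update(source_mean, source_var,
                                            target_data, noise_var=0.09)
```

The adapted mean moves toward the new domain's data while the variance shrinks, quantifying how much the few new samples actually changed the belief.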

Out-of-Distribution Samples

Detecting out-of-distribution (OOD) samples, i.e., inputs that differ from the training distribution, is key to knowing when a model's predictions should not be trusted.


Under Construction

Eye-Hand Coordination

Human Factors in HCI

  • Computer Vision
  • TBC
Medical Image Analysis

Out-of-distribution (OOD) detection

  • Photomicroscopy
  • Malaria detection in blood smears