To fully integrate deep learning into robotics, deep learning systems must reliably estimate the uncertainty in their predictions. This would allow robots to treat a deep neural network like any other sensor: using established Bayesian techniques, the network’s predictions can be fused with prior knowledge or other sensor measurements, and information can be accumulated over time.
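As an illustration of this idea, the following minimal sketch fuses a (hypothetically calibrated) network class likelihood with a prior belief over classes using Bayes' rule, exactly as one would fuse any other sensor measurement. The numbers are toy values, not outputs of a real model.

```python
import numpy as np

def fuse(prior, likelihood):
    """Bayesian fusion: posterior is proportional to prior * likelihood,
    normalised over the discrete set of classes."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

prior = np.array([0.5, 0.3, 0.2])        # belief before the measurement
likelihood = np.array([0.1, 0.7, 0.2])   # calibrated network confidence (toy values)
posterior = fuse(prior, likelihood)

# Accumulating information over time: today's posterior becomes tomorrow's prior.
posterior2 = fuse(posterior, np.array([0.2, 0.6, 0.2]))
```

Note that this recursion is only valid if the network output can be interpreted as a calibrated likelihood, which is precisely what standard deep networks fail to provide.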
Deep learning systems typically return scores from their softmax layers that are proportional to the system’s confidence. However, these scores are not calibrated probabilities and therefore cannot be used directly in a Bayesian sensor fusion framework.
Current approaches to uncertainty estimation for deep learning include calibration techniques and Bayesian deep learning with approximations such as Monte Carlo Dropout or ensemble methods.
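To make the Monte Carlo Dropout idea concrete, here is a minimal sketch: dropout is kept active at test time, the same input is passed through the network many times, and the spread of the resulting predictions serves as an uncertainty estimate. The two-layer network and its random weights are placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with placeholder (untrained) weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept active at test time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # random dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return softmax(h @ W2)

x = rng.normal(size=4)
samples = np.stack([forward(x) for _ in range(100)])  # T = 100 MC samples
mean_pred = samples.mean(axis=0)    # approximate predictive distribution
uncertainty = samples.var(axis=0)   # per-class variance as an uncertainty proxy
```

An ensemble method follows the same recipe, except that the spread comes from independently trained networks rather than from random dropout masks.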
Research topics in this area focus on reliably extracting uncertainty, using Bayesian deep learning approaches, for object detection on a robot in open-set conditions. Building on this, the uncertainty information can be used to actively accumulate new knowledge about the environment, for example by asking a human for ground-truth labels (active continuous learning).
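One simple way to drive such active querying is to ask a human for a label whenever the predictive entropy exceeds a threshold. The sketch below assumes the entropy-based criterion and the threshold value purely for illustration; other acquisition functions (e.g. mutual information) are equally possible.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a predictive distribution."""
    p = np.clip(p, 1e-12, 1.0)  # avoid log(0)
    return float(-(p * np.log(p)).sum())

def should_query_human(pred, threshold=0.9):
    """Request a ground-truth label when the prediction is too uncertain."""
    return entropy(pred) > threshold

confident = np.array([0.95, 0.03, 0.02])  # low entropy: accept the prediction
uncertain = np.array([0.40, 0.35, 0.25])  # high entropy: ask a human for a label
```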
We aim to develop new network architectures, training regimes and loss functions that allow deep learning models to express uncertainty.
We also aim to investigate how this uncertainty information can make deep learning more reliable in safety-critical applications, ranging from self-driving cars to domestic service robots.
Skills and experience
You should have a strong background in Python programming and linear algebra.
You are also required to understand the foundations of machine learning, computer vision and robotics.
You may be eligible to apply for a research scholarship.
Contact the supervisor for more information.