If you think I'm the person you were looking for, you should definitely take a look at my publications and software.
My previous research interests were in the area of multi-robot and human-robot coordination for active sampling tasks.
My current research aims to produce a system in which an agent can transfer motor controllers learned in simulated or analogue environments to a target environment. From our experience, learning controllers from scratch on a real robot carries a high risk of damaging the robot. More importantly, it has a high cost in terms of human supervision and setup. Similar to how pilots first learn the basics of flight using a simulator, we want robots to adjust learned controllers as new data from the target environment comes in. We also want to give robots the ability to determine whether such adjustment is possible: when it is not, the robot should use the data to improve learning on the simulator.
Lately, I've also been investigating the use of Bayesian neural networks for model-based reinforcement learning and inverse reinforcement learning. Using this type of network, we can obtain control policies that are robust to noise and may generalize better than deterministic feedforward networks.
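To give a feel for the idea, here is a minimal sketch (not our actual implementation) of one common way to get approximate Bayesian uncertainty from a neural network: Monte Carlo dropout, where dropout is kept on at prediction time and many stochastic forward passes are averaged. The network, its random weights, and the toy state-action input below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward dynamics model f(state, action) -> next state (1-D toy).
# Weights are random here; in practice they would be trained on transitions.
W1 = rng.normal(size=(2, 32))
b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1))
b2 = np.zeros(1)

def forward(x, dropout_p=0.2):
    """One stochastic forward pass with dropout kept ON at inference
    (MC dropout), approximating a sample from a posterior over weights."""
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > dropout_p   # drop hidden units at random
    h = h * mask / (1.0 - dropout_p)         # inverted-dropout scaling
    return (h @ W2 + b2).item()

def predict(x, n_samples=100):
    """Average many stochastic passes: the mean is the prediction and the
    spread across passes reflects the model's (epistemic) uncertainty."""
    samples = np.array([forward(x) for _ in range(n_samples)])
    return samples.mean(), samples.std()

state_action = np.array([0.5, -0.3])  # hypothetical (state, action) pair
mean, std = predict(state_action)
print(f"predicted next state: {mean:.3f} +/- {std:.3f}")
```

The uncertainty estimate is what makes such models useful for control: a planner can be cautious where the predicted spread is large, which is one reason stochastic networks can be more robust than deterministic ones.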
This is what some of our experiments with underwater robots look like: