Research Scientist
DeepMind, London
Keywords: reinforcement learning, temporal abstraction, off-policy learning
News
- [December 2019] Excited to present our paper on Hindsight Credit Assignment as a spotlight at NeurIPS in Vancouver
- [August 2019] Had an amazing time preparing and teaching the deep reinforcement learning lecture at the DLRL summer school in Edmonton (video)
- [July 2019] Back to updating the website! Excited to travel to Montreal for RLDM in a few days to present the Termination Critic, and speak about inductive biases
- [January 2018] I completed my PhD and have joined DeepMind in London. My thesis can be found here.
- [December 2017] Our paper Learning with Options that Terminate Off-Policy won the best paper award at the Hierarchical Reinforcement Learning workshop at NIPS!
About
My work is aimed at designing principled reinforcement learning algorithms that are able to leverage structure, discover abstractions, and generalize across environments. I would also like to get rid of the primitive time step one day. Before joining DeepMind, I completed my PhD in 2017 at the AI lab of VU Brussel, working with Prof. Ann Nowe and Peter Vrancx on eclectic reinforcement learning algorithms. I was also involved in an assistive exoskeleton project, where I was responsible for providing the high-level control, and doing a lot of prediction. Before this, I received my Masters degree at Oregon State University with the amazing Prof. Cora Borradaile, working on max flow algorithms in planar graphs.
My (likely outdated) CV can be found here, if you are into that kind of thing.