Email: anna -at- harutyunyan -dot- net
Research keywords: reinforcement learning, reward shaping, off-policy learning, time series prediction
- [November 2017] Our two option papers Learning with Options that Terminate Off-Policy and Reinforcement Learning in POMDPs with Memoryless Options and Option-Observation Initiation Sets were accepted to AAAI!
- [March 2017] I had the pleasure of visiting SequeL (INRIA Lille) and speaking about reward shaping
- [August 2016] Our paper Safe and Efficient Off-Policy Reinforcement Learning was accepted at NIPS!
- [August 2016] I had a great time in Montreal attending the Deep learning summer school, and visiting the Reasoning and Learning Lab
- [June 2016] Our paper Q(lambda) with Off-Policy Corrections was accepted at ALT — I am looking forward to presenting it in October!
- [January 2016] Our applied paper Predicting Seat-Off and Detecting Start-of-Assistance Events for Assisting Sit-to-Stand with an Exoskeleton was accepted for publication in IEEE Robotics and Automation Letters
- [September 2015] I will be spending this fall in London, interning at Google DeepMind
- [May 2015] Our paper Off-Policy Reward Shaping with Ensembles received the FoCaS Best Paper award at the ALA workshop at AAMAS 2015
I am a PhD student at the AI lab of VU Brussel, working with Prof. Ann Nowe. My research is primarily in reinforcement learning. I am interested in designing and analyzing efficient and scalable learning architectures, which draws on many ingredients, such as reward shaping, ensembles, and off-policy learning. I am also involved in an assistive exoskeleton project, where I am responsible for the high-level control, which involves a great deal of time series prediction.
Before this, I received my Master's degree from Oregon State University with the amazing Prof. Cora Borradaile, working on max-flow algorithms in planar graphs.
My CV can be found here, if you are into that kind of thing.