I am researching ways to transfer knowledge between tasks in Reinforcement Learning environments. I previously used GVGAI to evaluate how trained agents adapt to changes in the environment/task; you can read more about this work here. My current work focuses on learning goal-conditioned policies that are capable of adapting to new tasks with little or no training, simply by modulating the goal signals.
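The core idea can be sketched with a toy example (illustrative only, not my actual models): a single policy receives both the state and a goal, so swapping the goal signal changes the behaviour with no retraining.

```python
# Toy goal-conditioned policy on a grid: the same policy produces different
# behaviour purely by changing the goal it is conditioned on.

def goal_conditioned_policy(state, goal):
    """Greedy policy: move one step toward the given goal."""
    x, y = state
    gx, gy = goal
    if x != gx:
        return "right" if gx > x else "left"
    if y != gy:
        return "up" if gy > y else "down"
    return "stay"

def rollout(start, goal, max_steps=20):
    """Follow the policy until the goal is reached (or the step budget runs out)."""
    moves = {"right": (1, 0), "left": (-1, 0),
             "up": (0, 1), "down": (0, -1)}
    state, trajectory = start, [start]
    for _ in range(max_steps):
        action = goal_conditioned_policy(state, goal)
        if action == "stay":
            break
        dx, dy = moves[action]
        state = (state[0] + dx, state[1] + dy)
        trajectory.append(state)
    return trajectory

# Same policy, two different goals -> two different behaviours.
path_a = rollout((0, 0), goal=(2, 0))
path_b = rollout((0, 0), goal=(0, 2))
```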
Malmo QMUL repository
I worked on improving the Malmo repository. The main additions were a launcher that facilitates running Malmo, an updated pip package, a screen recorder, and symbolic wrappers. I also wrote a few tutorials, in the form of Jupyter notebooks, on how to get started with Malmo and use RLlib to train both single- and multi-agent policies. Along with the notebooks, we provide RLlib checkpoints for both the single- and multi-agent mobchase missions. The QMUL Malmo repository can be found on GitHub.
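The symbolic-wrapper idea can be sketched as follows (hypothetical names, not the actual QMUL Malmo code): wrap the environment so that raw observations are converted into a compact symbolic representation before they reach the agent.

```python
# Minimal wrapper pattern: the agent sees symbolic features instead of raw
# observations. Names and observation contents are illustrative.

class ToyMalmoEnv:
    """Stand-in environment returning a raw, pixel-like observation."""
    def reset(self):
        return {"pixels": [[0] * 4 for _ in range(4)], "agent_pos": (1, 2)}

class SymbolicWrapper:
    """Expose only symbolic features (here, the agent's grid position)."""
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self._to_symbolic(self.env.reset())

    @staticmethod
    def _to_symbolic(raw):
        x, z = raw["agent_pos"]
        return {"x": x, "z": z}

env = SymbolicWrapper(ToyMalmoEnv())
obs = env.reset()  # a small symbolic dict instead of a pixel grid
```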
Tabletop Games Framework (TAG)
The Tabletop Games Framework (TAG) is a Java-based benchmark for developing modern board games for AI research. TAG provides a common skeleton for implementing tabletop games, based on a common API for AI agents, a set of components and classes to easily add new games, and an import module for defining game data in JSON format. At present, the platform includes implementations of seven different tabletop games, which can also serve as examples for further development. Additionally, TAG incorporates logging functionality that allows the user to perform a detailed analysis of a game in terms of action space, branching factor, hidden information, and other measures of interest for Game AI research. You can find the repository here and the paper that describes the design here.
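The benefit of a common agent API is that any agent can play any game that exposes the same interface. The sketch below (in Python with hypothetical names; TAG itself is Java, and its actual class names differ) shows the general shape: every agent implements one decision method over an observed state and the set of legal actions.

```python
# Illustrative agent interface in the spirit of a common tabletop-games API.
from abc import ABC, abstractmethod
import random

class Agent(ABC):
    @abstractmethod
    def get_action(self, observation, actions):
        """Pick one of the currently legal actions."""

class RandomAgent(Agent):
    """Baseline agent: choose a legal action uniformly at random."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def get_action(self, observation, actions):
        return self.rng.choice(actions)

class OneStepLookaheadAgent(Agent):
    """Greedy agent: rank legal actions with a supplied heuristic."""
    def __init__(self, heuristic):
        self.heuristic = heuristic

    def get_action(self, observation, actions):
        return max(actions, key=lambda a: self.heuristic(observation, a))

# Toy usage: the observation is a target number, actions are candidate numbers,
# and the heuristic prefers the action closest to the target.
agent = OneStepLookaheadAgent(lambda obs, a: -abs(a - obs))
best = agent.get_action(observation=5, actions=[1, 4, 9])
```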
I was involved in creating the Java version of the Pommerman framework, which is available here. We also evaluated Statistical Forward Planning methods in the framework; you can read the paper here. A screenshot of the game's GUI is shown on the right.
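Statistical Forward Planning methods use a forward model of the game to simulate possible futures at decision time. A minimal sketch of one such method, a flat Monte Carlo planner on a toy game (illustrative only, not the paper's implementation):

```python
import random

def forward_model(state, action):
    """Toy deterministic forward model: the state is a position on a line."""
    return state + action

def evaluate(state, target=10):
    """Heuristic value of a state: closer to the target is better."""
    return -abs(target - state)

def flat_monte_carlo(state, actions, rollouts=30, horizon=5, seed=0):
    """Simulate random rollouts after each first action; pick the best average."""
    def rollout_value(first_action):
        # Re-seed per action so every action is scored on the same rollout noise.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(rollouts):
            s = forward_model(state, first_action)
            for _ in range(horizon - 1):
                s = forward_model(s, rng.choice(actions))
            total += evaluate(s)
        return total / rollouts
    return max(actions, key=rollout_value)

# Moving toward the target (action +1) scores best on average.
best_action = flat_monte_carlo(state=0, actions=[-1, 0, 1])
```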
Game parameter search using MAP-Elites
We used a simple game called CaveSwing to tune the game's parameters with the MAP-Elites algorithm. Using MAP-Elites, we could find levels that the agent played with different behaviours. The heatmap below shows the result of one MAP-Elites run, mapping the agent's average height when solving the level against its average speed when finishing the level. The code is available on GitHub here.
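MAP-Elites keeps an archive gridded over behaviour descriptors (here, the two axes of the heatmap) and stores the best solution found in each behaviour cell, so it returns a whole map of diverse elites rather than a single optimum. A minimal sketch on a toy problem (illustrative, not the CaveSwing code):

```python
import random

def evaluate(genome):
    """Toy evaluation: a fitness plus two behaviour descriptors in [0, 1]."""
    height, speed = genome
    fitness = -(height - 0.5) ** 2 - (speed - 0.5) ** 2  # peak at (0.5, 0.5)
    return fitness, (height, speed)

def map_elites(iterations=2000, bins=5, seed=0):
    rng = random.Random(seed)
    archive = {}  # (height_bin, speed_bin) -> (fitness, genome)

    def cell(descriptor):
        """Discretise a behaviour descriptor into an archive cell."""
        return tuple(min(int(d * bins), bins - 1) for d in descriptor)

    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate a random elite from the archive...
            _, parent = archive[rng.choice(list(archive))]
            genome = tuple(min(1.0, max(0.0, g + rng.gauss(0, 0.1)))
                           for g in parent)
        else:
            # ...otherwise sample a fresh random genome.
            genome = (rng.random(), rng.random())
        fitness, descriptor = evaluate(genome)
        key = cell(descriptor)
        # Keep the candidate only if it beats the current elite in its cell.
        if key not in archive or fitness > archive[key][0]:
            archive[key] = (fitness, genome)
    return archive

archive = map_elites()
# The archive now holds one elite per visited behaviour cell, which is
# exactly the structure visualised as a heatmap.
```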
As part of the first-year training in IGGI, we had to design and develop a game in Unity within a week. We came up with a first-person, split-screen game where the players have to grapple to avoid falling into the lava. The game features multiple weapon types and two game modes: capture the flag and deathmatch. You can check out the gameplay in the video below: