Queen Mary University of London
I graduated from the University of California, Irvine with a BSc in Computer Game Science and a minor in Statistics. My undergraduate thesis focused on augmenting Monte Carlo tree search with a value network trained through a self-play framework similar to AlphaZero's. During my undergraduate degree, I became interested in the intersection of games and artificial intelligence: applying methods from reinforcement learning, graphical models, and knowledge representation to game playing and game design. My long-term goal is to work on the problem of formalizing game elements, representing game systems in a way that supports automatic reasoning and inference. I also enjoy playing games where I can customize and theorycraft my playstyle to satisfy particular gameplay fantasies while beating the game.
My current research falls within Automated Game Design Learning, an emerging field of AI research that aims to learn models of a game's design through play. The prevailing strategy is to play out the full game over thousands of iterations, which can be impractical for complex games with large state spaces and computationally expensive forward models. My research will focus on applying Go-Explore, a recent exploration paradigm that outperforms many state-of-the-art methods, to improve the efficiency of automated playtesting of tabletop games: by maintaining an archive of interesting game states, it can reduce the time needed for self-play. The research will be conducted primarily within the TAG framework and aims to be game-agnostic. On successful completion, this research will improve game development cycles, resulting in higher-quality games, and may offer unique insights into the game design process.
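To give a flavour of the archive idea, the sketch below shows the core Go-Explore loop on a toy deterministic grid "game". It is a minimal illustration only, not TAG code: the environment (`step`), the cell abstraction (`cell_of`), and all parameter values are my own illustrative assumptions. The loop repeatedly selects an under-visited cell from the archive, returns to it by replaying its stored trajectory (possible because the forward model is deterministic), explores randomly from there, and archives any newly reached cells or shorter routes to known ones.

```python
# Minimal Go-Explore sketch on a toy 5x5 deterministic grid.
# Environment, cell function, and parameters are illustrative assumptions,
# not part of the TAG framework.
import random

random.seed(0)

START = (0, 0)
GOAL = (4, 4)

def step(state, action):
    """Toy deterministic forward model: move on a 5x5 grid, clamped at edges."""
    x, y = state
    dx, dy = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}[action]
    return (min(4, max(0, x + dx)), min(4, max(0, y + dy)))

def cell_of(state):
    """Cell representation: the raw state here; real games would downsample."""
    return state

def go_explore(iterations=200, rollout_len=5):
    # archive maps cell -> (trajectory of actions reaching it, visit count)
    archive = {cell_of(START): ([], 0)}
    for _ in range(iterations):
        # 1. Select a cell, favouring rarely visited ones (random tie-break).
        cell = min(archive, key=lambda c: archive[c][1] + random.random())
        traj, visits = archive[cell]
        archive[cell] = (traj, visits + 1)
        # 2. Return to that cell by replaying its trajectory.
        state = START
        for a in traj:
            state = step(state, a)
        # 3. Explore from there with random actions.
        new_traj = list(traj)
        for _ in range(rollout_len):
            a = random.choice("UDLR")
            state = step(state, a)
            new_traj.append(a)
            c = cell_of(state)
            # 4. Archive new cells, or shorter routes to known cells.
            if c not in archive or len(new_traj) < len(archive[c][0]):
                archive[c] = (list(new_traj), 0)
    return archive

archive = go_explore()
print(GOAL in archive)  # whether the far corner was ever reached
```

In a playtesting setting, "interesting" cells would be chosen by a game-specific or learned abstraction rather than raw state identity, and returning to a cell would use the game engine's ability to restore saved states; the archive structure and selection step are what carry over.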