Search Results
- TAG: Terraforming Mars
Author(s): RD Gaina, J Goodman, D Perez-Liebana. Abstract: more info TBA.
- On the Evaluation of Procedural Level Generation Systems
Author(s): O Withington, M Cook, L Tokarchuk. Abstract: more info TBA.
- iGGi PhD
iGGi is a collaboration between the University of York and Queen Mary University of London: the largest training programme worldwide for doing a PhD in digital games.
- An appraisal-based chain-of-emotion architecture for affective language model game agents
Author(s): M Croissant, M Frister, G Schofield, C McCall. Abstract: more info TBA.
- MCTS Pruning in Turn-Based Strategy Games.
Author(s): YJ Hsu, D Perez-Liebana. Abstract: more info TBA.
- Progression in a language annotation game with a purpose
Author(s): C Madge, J Yu, J Chamberlain, U Kruschwitz, S Paun, M Poesio. Abstract: more info TBA.
- Utilizing the Untapped Potential of Indirect Encoding for Neural Networks with Meta Learning
Author(s): A Katona, N Lourenço, P Machado, DW Franks, JA Walker. Abstract: more info TBA.
- Novel video narrative from recorded content
Theme: Creative Computing. Project proposed & supervised by Nick Pears (nick.pears@york.ac.uk).
Project proposal abstract: To stimulate interest and engagement in games, it is important to give players a wide variety of video content that provides scenario variations each time they engage with the game. However, manually creating a large volume of diverse video content is expensive and time-consuming. This project aims to generate novel video narratives from recorded content with minimal human intervention. This requires automatic visual scene understanding that tags scene content and scene actions, either frame by frame or on a short-clip basis. Alongside understanding frame content, action segmentation strategies will be developed and evaluated, enabling the construction of short novel video narratives, for example from a manually defined storyline. Deep learning tools and techniques will be employed throughout the project.
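As a hedged illustration of the auto-tagging step this abstract describes, the sketch below labels sampled video frames with a pretrained ImageNet classifier. It assumes PyTorch, torchvision and OpenCV are installed; the video filename, sampling interval and choice of classifier are illustrative, not the project's prescribed pipeline.

```python
# Minimal sketch: frame-by-frame auto-tagging with a pretrained image
# classifier. Illustrative only; the video path and sampling rate are
# placeholder assumptions.
import cv2
import torch
from torchvision.models import resnet50, ResNet50_Weights
from torchvision.transforms.functional import to_pil_image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()           # matching normalisation/resize
categories = weights.meta["categories"]     # ImageNet class names

def tag_video(path, every_n_frames=30, top_k=3):
    """Yield (frame_index, [(label, score), ...]) for sampled frames."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV is BGR
            batch = preprocess(to_pil_image(rgb)).unsqueeze(0)
            with torch.no_grad():
                probs = model(batch).softmax(dim=1)[0]
            scores, labels = probs.topk(top_k)
            yield idx, [(categories[i], s.item())
                        for i, s in zip(labels.tolist(), scores)]
        idx += 1
    cap.release()

for frame_idx, tags in tag_video("gameplay.mp4"):
    print(frame_idx, tags)
```

Per-frame tags like these could then feed the action-segmentation stage the abstract mentions; a dedicated video action model would be the natural step beyond per-frame labels.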
- Places That Don't Exist
Theme: Immersive Technology. Project proposed & supervised by William Smith (william.smith@york.ac.uk).
Project proposal abstract: Imagine playing a video game inside your favourite movie, with scenes from the movie recreated exactly and in full detail, or playing a game set at a historical site, building or city that has since been destroyed, rendered photorealistically as it once appeared. The goal of this project is to combine state-of-the-art 3D computer vision and procedural content generation to create game-ready scene models and assets from movies, contemporary photos, plans or works of art. 3D reconstruction techniques such as structure-from-motion or deep monocular depth estimation can be used to reconstruct raw models of the observed parts of the scene. Deep-learning-based methods will then be used to extrapolate and clean the models, producing complete scene layouts with photoreal textures.
Sample references: https://github.com/skanti/scenecad and https://github.com/nianticlabs/monodepth2
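The second reference above points to monodepth2 for deep monocular depth estimation. As a hedged sketch of that reconstruction step (not the project's actual method), the example below runs MiDaS, a related monocular depth model exposed through torch.hub; the image filename is a placeholder, and PyTorch, OpenCV and network access for the hub download are assumed.

```python
# Hedged sketch: monocular depth estimation with MiDaS via torch.hub,
# standing in for the depth-estimation step (the abstract cites
# monodepth2, a related approach).
import cv2
import torch

# Download a small pretrained MiDaS model and its matching transform.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

# "scene_photo.jpg" is a placeholder input image.
img = cv2.cvtColor(cv2.imread("scene_photo.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))       # (1, H', W') relative inverse depth
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),             # add channel dim for interpolate
        size=img.shape[:2],                  # resample to input resolution
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

print(depth.shape, depth.min(), depth.max()) # per-pixel relative depth map
```

A relative depth map like this is only the raw-model stage; the extrapolation and cleanup the abstract describes would sit downstream of it.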