Search Results
- Prof Nick Pears
Prof. Nick Pears, University of York (Supervisor)

Nick Pears is a Professor of Computer Vision in York's Vision, Graphics and Learning (VGL) research group. He works on statistical modelling of 3D shapes, with an emphasis on the human face and head. The Liverpool-York Head Model and the associated Headspace training set have been downloaded by over 100 research groups internationally, and the Universal Head Model has been downloaded by 50 research groups. His most recent work with his PhD students has focused on semantic disentanglement of 3D images and on making autonomous vehicles safer and more trustworthy when they rely on computer vision systems. He has assessed many PhDs, including work on constructing generative models for novel video content using adversarial deep learning techniques.

Email: nick.pears@york.ac.uk | Website: https://www-users.cs.york.ac.uk/np7/ | LinkedIn: https://www.linkedin.com/in/nick-pears-90970312/

Themes: Creative Computing, Game AI
- Cristina Dobre
Dr Cristina Dobre, Goldsmiths (iGGi Alum)

Cristina Dobre has a background in Mathematics and Computing, and received a distinction in her undergraduate degree in Computer Science. "My current focus is on the nonverbal cues that influence and shape social interaction in immersive VR environments. More broadly, I'm investigating autonomous agents (or virtual humans) in social settings in terms of their non-verbal interactions with users. I'm interested in the underlying mechanics of social interaction that help in developing an empathetic and engaging virtual human. At the moment, I'm working on ML models based on multimodal datasets to detect various social cues (such as gaze) or human-defined social attitudes (such as engagement) in social interactions in VR. I'm also interested in generating more complex behaviour for virtual characters (NPCs) that will improve the user's experience with NPCs in a social VR setting.

Designing communication and other social interactions in immersive VR can be a challenging task, and aspects of this are addressed in my research. The findings from these studies can help game designers and developers determine appropriate non-verbal (and verbal) behaviour for non-player characters in games, especially VR games. Beyond the games industry, the findings would also be useful for applications such as designing multimodal human-machine interactions and systems for medical purposes, therapy for social anxiety disorders, simulation, training or learning."

Email: cristina.dobre@uni-a.de | Mastodon: https://hci.social/@ShesCristina | LinkedIn: https://linkedin.com/shesCristina | GitHub: https://www.github.com/shesCristina

Featured publication(s):
- Social Interactions in Immersive Virtual Environments: People, Agents, and Avatars
- Rolling Horizon Co-evolution in Two-player General Video Game Playing
- Using machine learning to generate engaging behaviours in immersive virtual environments
- More than buttons on controllers: engaging social interactions in narrative VR games through social attitudes detection
- Nice is Different than Good: Longitudinal Communicative Effects of Realistic and Cartoon Avatars in Real Mixed Reality Work Meetings
- Immersive Machine Learning for Social Attitude Detection in Virtual Reality Narrative Games
- Direct Gaze Triggers Higher Frequency of Gaze Change: An Automatic Analysis of Dyads in Unstructured Conversation

Themes: Game AI, Immersive Technology
- Oliver Scholten
Dr Oliver Scholten, University of York (iGGi Alum)

Oliver Scholten works on understanding the use of cryptocurrency technologies for gambling and gaming. His work provides researchers with the tools and context needed to understand player behaviours in these technologically advanced domains. He is the creator of gamba, a Python library designed to enable quick replication of existing player behaviour tracking studies. He has also published several peer-reviewed articles, and has had written evidence published by the UK House of Lords describing the mechanics behind decentralised gambling applications.

As a PhD student, his thesis focused on decoding and analysing cryptocurrency gambling and cryptocurrency gaming transactions. These transactions offer researchers a more granular insight into both gambling and gaming than has historically been possible; this work therefore lays the foundations for explorations across different schools of research and, more specifically, advanced player transaction analytics.

Email: oliver@gamba.dev | Website: https://www.ojscholten.com | LinkedIn: https://www.linkedin.com/in/ojscholten | GitHub: https://github.com/ojscholten

Featured publication(s):
- On the Evaluation of Procedural Level Generation Systems
- On the Behavioural Profiling of Gamblers Using Cryptocurrency Transaction Data
- Inside the decentralised casino: A longitudinal study of actual cryptocurrency gambling transactions
- Decentralised Gambling Overview
- Decentralised Gambling: The York Combined Transaction Set
- Unconventional Exchange: Methods for Statistical Analysis of Virtual Goods
- Utilising VIPER for Parameter Space Exploration in Agent Based Wealth Distribution Models
- Ethereum Crypto-Games: Mechanics, Prevalence, and Gambling Similarities

Themes: Game Data
- Dr Paulo Rauber
Dr Paulo Rauber, Queen Mary University of London (Supervisor)

I am a Lecturer in Artificial Intelligence at Queen Mary University of London. Before becoming a lecturer, I was a postdoctoral researcher in the Swiss AI Lab, working on reinforcement learning under the supervision of Jürgen Schmidhuber. I believe that intelligence should be defined as a measure of an agent's ability to achieve goals in a wide range of environments, which makes reinforcement learning an excellent framework for studying many of the challenges that intelligent agents are bound to face.

Email: p.rauber@qmul.ac.uk | Website: https://paulorauber.com/ | GitHub: https://github.com/paulorauber

Themes: Game AI
- Emily Marriott
Emily Marriott, University of Essex (iGGi Alum)

Automated Story Generation for Games

Emily is researching automated story generation for video games, focusing on the use of Planning for real-time, dynamic generation. Ideally, the stories created will reflect choices made by the player during gameplay and will update continually throughout play. The aim of this research is to create a system that could be easily used in the development of more adaptive games. This could improve player enjoyment, increase replayability, and allow for the inclusion or exclusion of content that may only appeal to niche audiences.

Emily's current focus is on investigating story structures and pacing to create a template for generating good stories specifically for games: stories that are consistent, well-structured and interesting. This involves studying the pacing requirements of existing games to establish what these are and how they differ from the requirements of film and TV. The system will ideally be integrated with existing game-development tools and provide an easy-to-use interface, making the creation of adaptive games easier and quicker. The eventual goal is a full story-generation system that would support both the creation of quests that emerge from story requirements and a game world that fits the environment required for the story.

Emily graduated from Glyndŵr University with a BSc in Computer Games Development before completing an MSc in Computer Science at Oxford Brookes University. Her MSc dissertation involved generating dungeon levels and quests using grammars, based on the play style the player appeared to favour. Emily enjoys playing both tabletop and computer roleplaying games, especially ones in which player actions can have a dramatic effect on the game's progression.

Themes: Player Research
- Memo Akten
Dr Memo Akten, Goldsmiths (iGGi Alum)

Real-time, interactive, multi-modal media synthesis and continuous control using generative deep models for enhancing artistic expression

This research investigates how the latest developments in Deep Learning can be used to create intelligent systems that enhance artistic expression. These are systems that learn, both offline and online, and that people interact with and gesturally ‘conduct’ to expressively produce and manipulate text, images and sounds. The desired relationship between human and machine is analogous to that between an art director and a graphic designer, or a film director and a video editor: a visionary communicates their vision to a ‘doer’ who produces the output under the direction of the visionary, shaping the output with their own vision and skills. Crucially, the desired human-machine relationship here also draws inspiration from that between a pianist and piano, or a conductor and orchestra: again a visionary communicates their vision to a system which produces the output, but this communication is real-time, continuous and expressive; it is an immediate response to everything that has been produced so far, creating a closed feedback loop.

The key area that the research tackles is as follows. Given a large corpus (e.g. thousands or millions) of example data, we can train a generative deep model. That model will hopefully contain some kind of ‘knowledge’ about the data and its underlying structure. The questions are: (i) How can we investigate what the model has learnt? (ii) How can we do this interactively and in real time, expressively exploring the knowledge that the model contains? (iii) How can we use this to steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it, again in real time and through expressive, continuous interaction and control?

Memo Akten is an artist and researcher from Istanbul, Turkey. His work explores the collisions between nature, science, technology, ethics, ritual, tradition and religion. He studies and works with complex systems, behaviour, algorithms and software, and collaborates across many disciplines spanning video, sound, light, dance, software, online works, installations and performances. Akten received the Prix Ars Electronica Golden Nica in 2013 for his collaboration with Quayola, ‘Forms’. Exhibitions and performances include the Grand Palais, Paris; Victoria & Albert Museum, London; Royal Opera House, London; Garage Center for Contemporary Culture, Moscow; La Gaîté lyrique, Paris; Holon Design Museum, Israel; and the EYE Film Institute, Amsterdam.
Email: memo@memo.tv

Featured publication(s):
- Top-Rated LABS Abstracts 2021
- Deep visual instruments: realtime continuous, meaningful human control over deep neural networks for creative expression
- Deep Meditations: Controlled navigation of latent space
- Learning to see: you are what you see
- Calligraphic stylisation learning with a physiologically plausible model of movement and recurrent neural networks
- Mixed-initiative creative interfaces
- Learning to see
- Real-time interactive sequence generation and control with Recurrent Neural Network ensembles
- Collaborative creativity with Monte-Carlo Tree Search and Convolutional Neural Networks
- Sequence generation with a physiologically plausible model of handwriting and Recurrent Mixture Density Networks
- Deepdream is blowing my mind
- All watched over by machines of loving grace: Deepdream edition
- Realtime control of sequence generation with character based Long Short Term Memory Recurrent Neural Networks

Themes: Game AI
- Dr William Smith
Dr William Smith, University of York (Supervisor)

William Smith is a Reader in the Computer Vision and Pattern Recognition research group in the Department of Computer Science at the University of York. He is currently a Royal Academy of Engineering/The Leverhulme Trust Senior Research Fellow and an Associate Editor of the journal Pattern Recognition. His research interests span vision, graphics and machine learning: specifically, physics-based and 3D computer vision, shape and appearance modelling, and the application of statistics and machine learning to these areas. The application areas in which he most commonly works are face and body analysis and synthesis, surveying and mapping, object capture, and inverse rendering. A wide variety of tools and areas of mathematics are useful in his research, such as convex optimisation, nonlinear optimisation, manifold learning, learning and optimisation on manifolds, computational geometry, and low-level computer vision (e.g. features and correspondence). He leads a team of five PhD students and one postdoc and has published over 100 papers, many in the top conferences and journals in the field. He was General Chair for the ACM SIGGRAPH European Conference on Visual Media Production in 2019 and Program Chair for the British Machine Vision Conference in 2020.

Research themes: Game AI, Game Design, Computational Creativity, Graphics and rendering, Content creation

Email: william.smith@york.ac.uk | Website: https://www-users.cs.york.ac.uk/wsmith/ | LinkedIn: https://www.linkedin.com/in/william-smith-b5421a70/ | GitHub: https://github.com/waps101

Themes: Creative Computing, Design & Development, Game AI, Player Research
- David Gundry
Dr David Gundry, University of York (iGGi Alum)

Using Applied Games to Motivate Speech Without Bias (industry placement: Lightspeed Research)

Eliciting linguistic data faces several difficulties, such as the investment of researcher time required and the small number of available participants. Because of this, many language elicitation studies have to make do with few subjects and coarse sampling rates (measured in months). It would be ideal if a game could crowd-source relevant linguistic data through frequent, short game sessions. To this end, David's research looks into how games shape and elicit players' linguistic behaviour. The established design patterns of gamification do not apply to a domain that lacks a 'correct' answer, such as language or personal beliefs and attitudes. David's research shows how a player's strategic goals will systematically bias data collection, and how to design around this. The conclusion: the player's choice of how to express a given datum must be strategically irrelevant in the game.

David can remember the halcyon days when he had the free time to play games. Now he's doing a PhD and has a one-year-old. He has a background in linguistics. He loves writing expressive code and designing clever little games. He wants to show that research games can be fun, not just effective.

Featured publication(s):
- Trading Accuracy for Enjoyment? Data Quality and Player Experience in Data Collection Games
- Designing Games to Collect Human-Subject Data
- Validity threats in quantitative data collection with games: A narrative survey
- Busy doing nothing? What do players do in idle games?
- Intrinsic elicitation: A model and design approach for games collecting human subject data

Themes: Applied Games
- Prof Richard Bartle
Prof. Richard Bartle, University of Essex (iGGi Co-Investigator, Supervisor)

Richard Bartle is a renowned pioneer in game design and research. He co-wrote the first virtual world, MUD ("Multi-User Dungeon"), in 1978, and has thus been at the forefront of the online games industry from its very inception. He is an influential writer on all aspects of virtual world design, development and management. As an independent consultant, he has worked with many of the major online game companies in the U.K. and the U.S. over the past 30 years. His 2003 book, Designing Virtual Worlds, has established itself as a foundation text for researchers and developers of virtual worlds alike. His Player Type theory is taught in game design programmes worldwide (he appears in examination questions!). His interests are directed mainly at virtual worlds, particularly Massively Multiplayer Online Role-Playing Games (MMORPGs, or MMOs), but cover all aspects of game design. He is keen to see AI used for non-player characters in MMOs (his PhD is in AI), and his current work considers the long-term moral and ethical implications of this. They're maybe not what you might think they were at first glance…

Email: rabartle@essex.ac.uk | Website: https://mud.co.uk/richard/ | LinkedIn: https://www.linkedin.com/in/richardbartle/

Themes: Design & Development, Game AI, Player Research
- Erin Robinson
Erin Robinson, University of York (iGGi PG Researcher)

Erin Robinson is a multimedia artist, experimental musician and PhD researcher from London. Her work primarily involves the design of interactive installations, where she takes a participatory approach to evolving visual-scapes, but it also takes form in fixed media, sound art, free improvisation, live visuals and immersive experiences. Her work critically engages with the concepts of posthumanism and postmodernism, exploring notions of authenticity and existence in the digital anthropocene by blurring the lines between organic and non-organic entities, reality and virtuality, self and otherness. She is a founding member of SubPhonics, an experimental music and sound art collective based in London. Recent works include ‘Flora_Synthetica’, shown at Peckham Digital 2024, and ‘Pluriversal Perspectives: Moss’, shown at the South London Botanical Institute and at the Conference for Designing Interactive Systems (Copenhagen) in 2024.

A description of Erin's research: "My research adopts a practice-based approach to exploring participant-contributed materials, a technique positioned at the intersection of participatory and new media arts. This interactive technique enables participants to contribute aesthetic and semiotic materials to new media artworks through open forms of interaction, including, but not limited to, text input, drawing and video feed. Although both participatory and new media artistic practices involve audience engagement, traditional interactive media often impose restrictive computational frameworks. In contrast, participatory practices, typically conducted in person, allow participants greater freedom, resulting in deeper engagement and more diverse, unexpected outcomes that reflect the audience's perspectives and behaviours. This research underscores the potential of digital artworks to provide more expansive and identity-reflecting experiences by incorporating participant-contributed materials. By drawing on the strengths of participatory practices, digital artworks can achieve a richer, more personalised form of interaction and more meaningful engagement with audiences."

Email: erin.robinson@york.ac.uk | GitHub: https://github.com/erinrrobinson

Supervisor(s): Prof. Sebastian Deterding

Themes: Design & Development, Immersive Technology
- Helen Tilbrook
Helen Tilbrook, University of York (iGGi Administrator)

iGGi Administrator at York.

Email: helen.tilbrook@york.ac.uk
- Dr Anna Bramwell-Dicks
Dr Anna Bramwell-Dicks, University of York (Supervisor)

Anna Bramwell-Dicks has an interdisciplinary background which started in Electronics and Music Technology before taking a sideways move into the field of Human-Computer Interaction research. She likes to combine her underlying interest in sound and music with applied psychology and creativity. She is very interested in research involving multimodal interaction (e.g. using audio, haptics, smell and/or proprioception as well as visuals within interfaces), particularly where audio is used to affect users' behaviour or experiences. She is also very interested in accessibility research and any research in the application area of mental health and mental illness.

As a lecturer in Web Development and Interactive Media, based in TFTI, Anna is always interested in work that involves designing and evaluating novel and interesting user experiences, particularly where that leads to the option to create fun, engaging, accessible experiences. She likes to work across a range of application areas, from learning environments to e-commerce to escape rooms and cultural exhibits! Anna is keen to work with students who want to design and develop gamified systems to support people with disabilities or physical or mental illness, or those who are interested in multimodal experiences.

Research themes: Accessibility, Multimodal and multisensory systems, Research methods

Email: anna.bramwell-dicks@york.ac.uk | LinkedIn: https://www.linkedin.com/in/anna-bramwell-dicks-2b941a28/

Themes: Accessibility, Applied Games, Design & Development, Game Audio, Player Research













