
iGGi Research Retreat "Unconference" Group Outcomes

The Future of AI

The "Problem"

We discussed what the "future of AI" might look like, how it might change us as a society (for better and worse), and what possibilities it could create.

What we did

As you can imagine, the "future of AI" is something of a broad and undirected topic. In the morning we therefore allowed free-flowing conversation to see where it led, and towards the end tried to join up the threads into those we felt were most worthy of further analysis. In the afternoon we tackled the specifics of how to approach a game with emergent characters and stories, a topic oft dreamed of by game designers but hitherto unattainable.

The "Outcome"

The morning discussions:

Our discussions were wide-ranging, but the opening question captured the essence of our inquiry. From personalised AI assistants we quickly moved to the broader economics of AI — circling back again and again to two themes: whether AI can ever replicate the human experience, and the friction between the utopian ideals we project onto it and the gravitational pull of capitalism.

I have sought to recount our discussion on these themes as accurately as memory allows, adding only modest(?) embellishment where it aids narrative coherence.


On AI and human interaction:

As someone quoted: “Do you know what it smells like in the Sistine Chapel?” (Good Will Hunting). The line reminds us that knowledge can be learned, but wisdom must be lived. Does an AI know what it is to be human? Can it ever truly understand the human experience? We are not just a collection of data points; we are a tapestry of emotions, experiences, and connections. AI can analyse patterns, but can it ever grasp the essence of what it means to be human?

Human art matters because it exposes something fragile. To create is to risk oneself: to bleed, to reveal, to offer a fragment of the human condition for others to recognise. AI can imitate the form, but form without risk is mimicry. Can imitation ever supply the soul? Perhaps all we truly crave, as a species, is to be seen — to connect with each other.

Yet history suggests authenticity is not always required. Chess engines long ago surpassed every human master, yet millions still prefer to watch people play. Calculators did not end mathematics; they expanded it, making it more ambitious and more accessible. Technology rarely erases human practice — but it does reframe it.

The question is not only whether we can connect with AI as with another human, but whether we will still insist on doing so. AI companions and “digital girlfriends” already suggest that some are content with machine-mediated intimacy. The unsettling prospect is not that AI lacks a soul, but that we may cease to care. What happens when a generation grows up regarding “connection enough” as something delivered by code? If we defer not only thought but also empathy, attention, and intimacy to our machines, what remains distinctly human? We shape our tools, and thereafter our tools shape us (Culkin).


On AI and economics:

AI will not escape the gravitational pull of commerce. As today’s internet is financed by advertising, it is inevitable that AI systems will be bent to the same imperatives — nudging our choices, steering our attention, and monetising our interactions. Already we see the first signs: an Amazon Echo Show inserting shopping prompts directly into the home. For all our talk of AI safety and ethics, it is commerce that drives development.

But once human labour itself is displaced, what then? A utopia of leisure where we are free to follow our passions? Or a dystopia in which a minority, owning the means of cognition itself, consigns the rest of us to redundancy? Could a society without labour even cohere — or would it demand a wholesale reinvention of politics, economy, and democracy itself?

As we tried to weave our threads together into something coherent, this was the question that seemed most fertile for further debate. What is the politics of AI? What would a political and economic system look like that could accommodate these changes? How do we ensure that the benefits of AI are shared equitably, rather than concentrating power and wealth in the hands of a few?


Our discussions were rich, unsettling, and illuminating, tracing both promise and peril. Yet the future owes no loyalty to our prophecies. The greater danger is not that AI will fail to know us, but that, in its shadow, we will lose the thread of ourselves.


In the afternoon:

Having solved the future of AI before lunch, we turned in the afternoon to the far more mundane task of reinventing the games industry.

One of the industry's evergreen obsessions is how to make characters and narratives more believable. As games grow ever more immersive, the hunger for deeper storytelling only intensifies. Yet, despite extraordinary technical progress, this is the frontier we keep failing to cross — because the obstacle is less technical, and more human.

The costs of creating content are crushing. Large language models offer a tantalising shortcut: a machine that might spin endless dialogue, branching quests, even whole worlds on demand. But this promise comes bound up with limitations so profound they may be unsolvable with current methods. My own curiosity lies in a hybrid approach: using generative AI not as an all-purpose author, but as a tool to help construct traditional symbolic systems — frameworks that could give structure and coherence while still leaving room for human craft. If it worked, it might nudge us forward in this arena without the problems that come with the other approaches.
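To make the shape of that hybrid a little more concrete, here is a minimal sketch of one way it could look in practice: a language model drafts a quest as structured data, and a conventional symbolic layer validates it before the game ever touches it. Everything here (the QuestNode structure, the draft_quest_json stand-in, the JSON format) is a hypothetical illustration of the idea, not something built or agreed at the retreat.

```python
# Sketch of the hybrid idea: generative AI drafts a symbolic structure,
# traditional code decides whether the game can actually use it.
# All names here are hypothetical illustrations, not an existing API.

import json
from dataclasses import dataclass


@dataclass
class QuestNode:
    """One beat in a quest graph: an id, some dialogue, and where it can lead."""
    node_id: str
    dialogue: str
    next_ids: list[str]


def draft_quest_json(premise: str) -> str:
    """Stand-in for an LLM call that returns a quest graph as JSON.

    In practice this would prompt a model with the premise and a strict
    output format; here it returns a canned example so the sketch runs.
    """
    return json.dumps({
        "nodes": [
            {"id": "meet_smith", "dialogue": "The blacksmith needs iron.", "next": ["find_ore"]},
            {"id": "find_ore", "dialogue": "You locate the ore vein.", "next": ["return_smith"]},
            {"id": "return_smith", "dialogue": "The blacksmith thanks you.", "next": []},
        ]
    })


def load_quest(raw_json: str) -> dict[str, QuestNode]:
    """Parse and validate the drafted structure before the game sees it.

    The symbolic layer enforces coherence: every referenced node must exist,
    so the model cannot hand the player a dangling plot thread.
    """
    data = json.loads(raw_json)
    nodes = {
        n["id"]: QuestNode(n["id"], n["dialogue"], list(n["next"]))
        for n in data["nodes"]
    }
    for node in nodes.values():
        for target in node.next_ids:
            if target not in nodes:
                raise ValueError(f"{node.node_id} links to unknown node {target!r}")
    return nodes


if __name__ == "__main__":
    quest = load_quest(draft_quest_json("A blacksmith short of iron"))
    print(f"Loaded quest with {len(quest)} beats")
```

The point of the sketch is the division of labour: the generative model is only ever asked for content that fits a symbolic frame the designer controls, and anything that fails validation simply never reaches the player.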

But this is not a conversation one can leap into lightly. It demands deep knowledge of how games are actually made, tested, and sold, as well as a sober reckoning with both the failures and the potential of LLMs. Much of our discussion was spent simply reaching the starting line. The group's diversity produced fresh perspectives, but the depth of the subject meant we could not advance far in the time available.

What did become clear was this: the problem is not just technical, it is communicative. If this debate is to progress, the challenge must be articulated in a way that is accessible beyond a narrow circle of experts. That, I realised, is the real work still ahead.

Post Script:

We live in a time of unprecedented change (at least in living memory). The world enjoyed relative peace until about 2020, but it feels as though the geopolitical sands are shifting in a once-in-a-century way, with wide-ranging political and economic implications for modern society broadly and for technology in particular. This is relevant to our discussions: the place and role of AI in our society is hard to gauge when that society is itself going through fairly tectonic shifts. It will be a job for future historians to determine whether the emergence of advanced AI and these changes are correlation or coincidence, but it is clear that we cannot evaluate and analyse AI in a vacuum. The wider context is key, and that context is both nebulous and constantly shifting. Perhaps AI can support such a perspective in ways that one (or many) human minds cannot comprehend at once?
